review_id,review,rating,decision midl20_1_1,"""This paper presents an adversarial domain adaptation method for retinopathy detection. The idea is to extract invariant and discriminative characteristics shared by different domains for the cross-domain OCT image classification task. The application studied in this paper is interesting and potentially significant. The proposed network utilizes several components, which generally makes sense. Overall, this paper is presented clearly. 1. The novelty of this paper is quite limited. There are many unsupervised domain adaptation methods which could be directly applied to retinopathy detection, for example: Conditional Adversarial Domain Adaptation; Adversarial Discriminative Domain Adaptation; Unsupervised Domain Adaptation with Adversarial Residual Transform Networks; DART: Domain-Adversarial Residual-Transfer Networks for Unsupervised Cross-Domain Image Classification; Unsupervised Domain Adaptation with Residual Transfer Networks. It would be better to provide more discussion of this literature. 2. Experiments seem to be quite insufficient and unconvincing. There is only one valid dataset for retinopathy detection. If this is the only available dataset, repeated experiments with cross-validation are needed. The novelty of this paper is quite limited given the large number of adversarial domain adaptation methods. And the experimental evaluation is insufficient, since there is only one dataset used and there is no significance test. """,2,1 midl20_1_2,"""This paper proposes a methodology to address a cross-domain retinopathy classification task. For doing so, a feature generator, a Wasserstein distance estimator, a domain discriminator, and a classifier were included in the model to enforce the extraction of domain-invariant representations. - The problem of designing methods for reducing the performance drop in OCT images from a different vendor is relevant for automated OCT imaging analysis. The proposed method is new and it was compared with up-to-date baseline methods. The contribution of the paper is not clear. Also, the feature visualization showcased in the results section is difficult to evaluate. The paper does not cite research on OCT cross-domain adaptation for different OCT devices. For instance, in Reducing image variability across OCT devices with unsupervised unpaired learning for improved segmentation of retina (Romo et al., 2019), cross-domain adaptation for segmentation tasks was achieved by using the cycleGAN algorithm. The topic covered by the paper is relevant to the OCT imaging community. However, there are considerable gaps in the presentation of the results, description of the methods and analysis of the results.""",3,1 midl20_1_3,"""Presenting a deep neural network to classify retinopathy in OCT images, the authors propose a domain adaptation method that combines an adversarial domain discriminator with a Wasserstein distance minimization. Experiments on OCT images and on handwritten digits suggest that this approach works better than alternative methods. * The experiments seem well-designed, with a separate evaluation set. * The method is compared with multiple competing methods, showing an improved performance on both problems. 
* The application to retinopathy detection seems fairly novel. * The authors provide some t-SNE-based analysis of the network output. * Although there is a comparison with alternative methods, I would have liked to see an ablation study in which the authors compared versions of their own method. This would allow us to evaluate the contributions of the discriminator and the Wasserstein distance separately. * The novelty of the methods is not extremely clear. How is this different from Shen et al. 2017? * The authors refer to their feature extractor as a ""generator"". I don't think this is a term that fits here: a generator usually refers to a model that outputs something like an input image. I would suggest simply calling this an encoder. * The paper makes a sloppy impression at times, as if it has been put together in a hurry (see the detailed comments for some examples). It would be good to do a careful proofreading of the text. I think this is a reasonable paper. The application is interesting, the method is evaluated on two datasets, there is some analysis of the results, and the method is compared with alternative approaches.""",3,1 midl20_1_4,"""This paper proposes a domain adaptation method for retinopathy detection from OCT images. It is based on a general domain adaptation framework aiming to learn domain-invariant features with a straightforward combination of a Wasserstein distance estimator and a domain discriminator. Evaluation is conducted with public digits datasets and private OCT datasets, with good results obtained. - The paper tackles the problem of domain adaptation on a new application, i.e., retinopathy detection from OCT images. - The presented method is validated on the public digits datasets and private OCT datasets. - Good results are achieved. - My major concern lies in the motivation of combining a Wasserstein distance estimator with a domain discriminator. Why could their combination contribute to extracting more domain-invariant features so as to improve domain adaptation performance? The authors are encouraged to justify this. - On a related note, an ablation study of the Wasserstein distance estimator and the domain discriminator should be conducted to analyze the contribution of the two components to extracting domain-invariant features. - In Table 2, it is interesting that the evaluation results of the source-only model are quite different across different methods. What could be the reason for that? The proposed method obtains quite good source-only results. It would be great to investigate what component in the model contributes to that. - The implementations of the other methods in comparison are not clearly described. For the comparison with other methods on the digits datasets, are the results directly referenced from their papers? For the comparison on OCT images, how are the methods in comparison implemented? What network architectures do they use? - I found that the formulation and description of the Wasserstein distance estimator are very similar to those in WDGRL. The authors are suggested to reformulate it to avoid similarities. - In Table 2, the reference format needs to be made consistent with the other references. Domain adaptation for retinopathy detection from OCT images has not been studied before. The proposed method is effective in extracting domain-invariant features and achieves good domain adaptation results. 
""",3,1 midl20_2_1,"""This paper presents an experimental comparison study of five methods for incorporating distance transform maps of ground truth into segmentation CNNs for medical image analysis. The V-Net is used as baseline. The test is done on two datatsets. The overall picture turns out to be mixed: There is no consistent performance enhancement when using the distance transform maps. In addition, the implementation details have remarkable effects on the final performance. The experimental work here is overall well done (although with some limitation, see below). The paper is well clearly structured and easy to read. Such details like the arrows in the caption of Tables indicating which direction is better is a real help. The authors give a realistic picture of the gains and limitations of the studied methods. The code, trained models and training logs will be publicly available. Such experimental studies are typically somewhat limited. This is also the case here. Only V-net is used as baseline, which is also not justified at all. The question arises to which extent the findings here generalize to other nets. Certainly, the same applies to the tasks with the related datatsets. This experimental study, although with limitations, does have some value to the research community. It will further increase the interest in investigating incorporating distance transform maps of ground truth into segmentation CNNs, in particular from a methodological perspective. In addition, the authors will make the code, trained models and training logs available, which will further support the use of such methods by other researchers.""",3,1 midl20_2_2,"""The authors summarize current findings about the incorporation of distance maps to the training of segmentation network. They also benchmark five methods in two datasets. The analysis of results and harmonizing of methods is imperfect and, as this work is a benchmark study, there is no technical novelty. I would have expected a finer analysis of the results for such a contribution. -Very relevant research topic. Try to give insights on general improvement of segmentation with CNNs. -willingness to make code open-source -they methods could have been presented in a more harmonized fashion -the quantitative analysis lacks precision (see detailed comments) -no qualitative analysis -the results are somehow inconclusive The improvement of the methods is too small to have a practical impact. The paper does not give new methodological insights. The comparison between methods could be harmonized better. """,2,1 midl20_2_3,"""This work summary the latest developments of incorporating of distance transform maps (DTM) of ground truth into segmentation CNNs, and evaluation results of five benchmark methods on two typical public datasets. As a summary, the authors divided the five methods into two categories: new loss functions with distance transform maps and additional auxiliary tasks with distance transform maps. They then evaluated these five methods with left atrial MRI and liver tumor CT datasets. The conclusion is that incorporating DTMto segmentation can improve performance, but not always in some methods. Comparative evaluations of parts of the-state-of-the-art methods on well-known public dataset are presented. These results help to develop new methodologies and application systems. Especially, all most of the selected five benchmark methods are not open source. Furthermore, authors presented results of the best hyperparameter search as appendix. 
No theoretical insights into the results of the comparisons. What leads to the different results on the two datasets is still unclear to readers, even though I know this requirement is difficult for deep-learning-based approaches. The selection criteria of the methods and the survey style are both unclear. Reporting comparative evaluations of state-of-the-art methods for incorporating distance transform maps into segmentation CNNs on two different public datasets is worthy of presentation. In particular, it includes new methods that are not open source. """,4,1 midl20_2_4,"""This paper proposes to analyze various ways of embedding distance maps (DM) to improve deep-learning based image segmentation. The authors compare several approaches from the literature using two segmentation datasets and confirm that this is useful information to add during training. Results seem to suggest that it is preferable to learn to generate the DM rather than to use the DM in an additional cost function. Moreover, the authors show that regressing the signed distance function is preferable over generating a distance transform map. The paper is well written and easy to follow. In my opinion it is a good match as it is an important message to convey to the community that, in general, adding distance-based information improves segmentation results. The authors have performed experiments on only two datasets. Using more open segmentation datasets, such as, for example, those from the medical image decathlon, would have strengthened the significance of the conclusions drawn. This is a simple and clear study on the benefits of distance maps in deep-learning based segmentation. Because the conclusions drawn are very general, this is, I believe, a good match for a conference such as MIDL. I understand it is not a major methodological contribution, but it is a simple and clear message to convey to the community.""",4,1 midl20_3_1,"""The authors claim to have developed a method to generate an ADC image from T2-weighted prostate MRI. The assumption is that T2-weighted imaging contains the information to generate ADC images. I find this assumption absurd. Prostate MRI is well researched and the information in T2 and DWI imaging is clearly distinct, and both are required. Depending on the zone, either T2 or DWI is required to diagnose prostate cancer (read PI-RADS). Why not claim this for all MRI imaging? Just acquire one sequence and you'll generate all other images of a patient!""",1,0 midl20_3_2,"""- Good summary of the clinical problem in prostate cancer and the need for a ""hybrid"" ADC map with more structural information - Use of GANs to ""translate"" between ADC and T2 maps, but the exact logic underlying what the GAN is optimizing for is not well explained. - Impact of post-processing of T2w MRI is unclear. - No quantitative evaluation, hard to tell how effective the approach is.""",2,0 midl20_3_3,"""Pros: The paper presents an excellent clinical motivation for the work and introduces the problem really well. Cons: However, there isn't anything methodologically new here. It's just an application of cycleGAN to this problem with the addition of the Boundary loss (BCE loss) presented in the appendix. Application of a well-known technique to this problem is not completely unreasonable if there are sufficient results. The experimental result is only one qualitative example. It's unclear how good this qualitative result is. For instance, would a radiologist use the generated ADC in place of the acquired ADC map for MRI interpretation? 
This is fundamentally important and crucial for this approach. Even if it wasn't possible to have a radiologist's assessment, it would be interesting to see how the generated ADC improves over the acquired ADC in prostate cancer classification. This is not presented either. """,1,0 midl20_4_1,"""key ideas: The basic idea of the paper is to use adversarial learning to ensure small feature survival in accelerated/undersampled MRI reconstruction. Essentially, the clinical issue is that when looking at the best and worst performing reconstruction models with deep learning in terms of qualitative measures (SSIM) and radiologists' image quality assessment, these models fail in reconstructing some relatively small abnormalities. These are then defined as false negatives. The paper tries to mitigate missing these by employing adversarial learning. experiments: The experiments are performed on the well-established open-source fastMRI dataset that consists of knee MRI images with a single-coil setting, including 4x and 8x acceleration factors. The authors embed a synthetic 'false-negative adversarial feature' (FNAF), a perceptible small feature that is present in the ground truth MRI but has disappeared upon MRI reconstruction via a learning model, in their data and see if models (U-net and I-RIM) improve in FNAF retrieval under attack. While the approach is very interesting and also seems to work on the synthetic data in terms of achieving the same quantitative measures (SSIM and PSNR) with a lower attack rate, on real data (i.e., compared to a radiologist) the results in Table 4 show that the FNAF-robust U-Net is only marginally better out of the small number of abnormalities found. significance: Dealing with accelerated MRI is a very important clinical problem, and especially the issue of missing small features is of even bigger clinical importance. Hence, I find this paper makes an important contribution to the field. -The paper is well written and the problem as well as its clinical importance are clearly stated - The method section is extensive and hence also clear -The synthetic evaluation is done carefully and with the real-world scenario in mind -While the approach is very interesting and also seems to work on the synthetic data in terms of achieving the same quantitative measures (SSIM and PSNR) with a lower attack rate, on real data (i.e., compared to a radiologist) the results in Table 4 show that the FNAF-robust U-Net is only marginally better out of the small number of abnormalities found. - In general, experiments and results fall a bit short; there is no explicit discussion - The generalization to real-world abnormalities is really what is missing. The authors do not discuss this much, but point to it as future work. As stated above, dealing with accelerated MRI is a very important clinical problem, and especially the issue of missing small features is of even bigger clinical importance. Hence, I find this paper makes an important contribution to the field. Even though the evaluation on the real dataset (i.e., compared to a radiologist) shows only marginal improvements so far, this is something worthwhile to look at.""",4,1 midl20_4_2,"""This is an interesting paper addressing the ""false negatives"" (i.e. pathologies which are not properly reconstructed) in deep learning-based MRI reconstruction. 
During the MedNeurIPS conference, the organizers of the FastMRI challenge showed that even the best algorithms have considerable disadvantages: they are able to remove pathologies such as meniscal tears and subchondral osteophytes in the reconstructed image. This is clearly a disadvantage and limits the acceptance of these techniques, and the authors attempt to solve this by using adversarial techniques to improve reconstruction. * The authors hypothesise that there can be two reasons pathologies are not properly reconstructed: as they are small, they might be in the higher frequencies and therefore more likely to be filtered out by the sampling scheme (which was my assumption), or the reconstruction model fails to recover them. The authors give evidence towards the latter, which is kind of surprising! * The authors show that adversarial training by inserting ""hotspots"" of a few voxels into the image helps in reconstructing the above-mentioned artefacts * Radiologist involved in the study * Tested both the iRIM and the U-net reconstruction * Results are not affected much in terms of SSIM and PSNR. * The paper is not that clearly written. To begin with, the community surely knows what an adversarial example is, as everyone will know the examples of stickers on road signs giving completely different predictions, but it is harder to understand in this context. The link with adversarial examples is not that obvious. You might want to consider rewriting it a bit and mention that these ideas are inspired by adversarial examples. For instance, what is the relevance of 2.3? * The loss is L2 while (iirc) all the winning solutions used an SSIM loss or L1 + SSIM. This is not the same as the solution of the FastMRI challenge, as with the iRIM the models all perform better when using L1/SSIM. Perhaps this method works better with L2? The paper is excellent in content, and even though it can definitely be improved by rewriting it significantly, this does not affect my score as this addresses a very important question and helps the field of machine learning reconstruction in MRI make a step forward.""",4,1 midl20_4_3,"""This paper presents an application-specific approach to obtain more robust deep learning reconstruction models for MRI. The goal of the reconstruction model is to reconstruct full MRIs from sub-sampled MRIs. To obtain a more robust model, artificial features are added to the images. These adversarial features are constructed such that they maximize the reconstruction model error. - The paper presents an interesting approach to improve the robustness of reconstruction models by leveraging domain-specific knowledge. - The results, especially Table 5, look promising as more visual features were preserved using the robust model. - Many details are missing in the paper, making it difficult to understand what was done precisely. Given that some sections are unnecessary (2.3 and 2.4), there was room to put these details. - The training protocol with random search is data augmentation and should be presented as such. This data augmentation is simply domain-specific. - The quantitative results seem incomplete and/or wrong: it is mentioned that the experiments are conducted on 4x and 8x accelerations, but Table 1 seems to report only 8x, and the reported I-RIM results are incorrect if it is 8x acceleration (from the I-RIM paper). - Too much importance is given to some parts like Figure 2, given that the paper is already longer than 8 pages and misses other important details. 
Overall, this paper presents an interesting research direction and gives some evidence that the proposed strategy may help obtain better reconstruction models for MRI. However, the paper is missing many details of formulation and implementation which are necessary in a conference like MIDL.""",2,1 midl20_4_4,"""The paper investigates two hypotheses for the false negative problem in deep learning-based MRI reconstruction. Small and rare features are the most relevant in clinical diagnostic settings. The authors develop the FNAF adversarial robustness framework. Using the proposed framework, the worst-case false negatives are found by adversarial attacks and used to improve the reconstruction task. The paper proposes a new idea to improve the robustness of networks for reconstructing MR images. The relevant clinical application is also interesting. The results are validated using several metrics. The contributions of the paper are not clear and it is not easy to follow the text. The paper proposes an idea for the robustness of the network, but according to the results the improvement is marginal. It is not clear how these adversarial perturbations generalize to real-world data. The contributions of the paper are not clear and it is not easy to follow the text. The paper proposes an idea for the robustness of the network, but according to the results the improvement is marginal. It is not clear how these adversarial perturbations generalize to real-world data.""",2,1 midl20_5_1,"""The paper presents a method which combines a commonly used CNN (ResNet and ResNeXt) and an LSTM which uses the slices as ""sequences"". The method is quite simple: the CNN output is used as the input to an LSTM, so it is difficult to give credit to the novelty of the model formulation. However, it appears to outperform an existing method, but the comparison method is a random-forest-based one. While I understand that the authors avoided unfair comparisons of their method against other ensemble-based methods, I am not sure if the presented comparison against an RF-based method is a fair one. The overall paper was well written and easy to understand.""",3,0 midl20_5_2,"""Although I disagree with the claim of novelty of this method, I find the paper interesting and appropriate for MIDL presentation. I think this approach has the merit of showing a practical application where the use of LSTMs to incorporate spatial consistency for volumetric problems is appropriate. This idea has been previously used for segmentation and, as far as I know, pre-trained models were not leveraged in that work. Here, the authors propose to re-use ImageNet pre-trained models, trained with slice-wise labels, and enforce consistency over volumes through a bidirectional LSTM. This has a series of advantages such as reduced compute resources needed for training and the ability to use pre-trained models from 2D applications, which are very common in computer vision. The description of datasets and methods is pretty accurate. The method is trained on the RSNA dataset published on Kaggle and tested on the GQ500 dataset. The results are convincing and they reveal the merits of the model. The model can in fact compete with methods making use of ensemble prediction and obtain comparable if not superior results. Table 1 could be omitted, so that Table 2 would fit within 3 pages. I would like to accept this paper. My rating is between weak accept (because of the limited novelty) and strong accept. 
""",4,0 midl20_5_3,"""This paper uses three windows of CT image to mimic a RGB image so that effective models pretrained on ImageNet can be utilized on medical images. A CNN-LSTM architecture is used to detect intracranial hemorrhage from a series of CT slices. The paper is clearly presented, and experiments on two datasets show that the proposed method works fairly well. Stregnth 1. Using three windows of CT image to mimic a RGB image, which make it possible to utilize models trained for natual images on medical images. 2. Experiments on two datasets show that the proposed idea works fairly well, especially the models trained on the bigger dataset generalize well on the smaller dataset. Weakness: 1. Methodology contribution is minimal. 2. Performance on the ICH Detection challenge fall at the postion about top 8% now (somewhere around 35th).""",3,0 midl20_5_4,"""The authors suggest to use a combination of transfer learning from pretrained CNNs with bidirectional LSTM layers to perform 3D CT volume classification in data where they assume that a pre-trained classification network should be useful. They compare their results with the leaderboard of the RSNA 2019 ICH classification challenge, where one of the two used datasets was acquired, and with the priorly reported results on the Qure.ai public head CT dataset. While in the RSNA challenge, they score somewhere between 30th and 40th position in the leaderboard, and they outperform Qure.AI by a small but consistent margin. The paper is clearly written with only few typos, concisely short, and overall easy to read. The authors cite relevant work that is close to their approach, and try to explain why and how theirs differs. I don't fully agree with the points they make, however. In particular using RGB-like images composed of differently windowed DICOM images makes little sense in general. Trainable convolutional layers prepended to the pre-trained network will likely do a better job in preprocessing the gray scale images than hand-selected window/level values, which are completely tailored to the human eye and a clinical task, not a computer eye. Also, the fact that the model is validated on a public dataset is no strong point. The authors also point out that their work is no ensemble classifier -- but they don't explain why this is a benefit. Likewise, the efficiency of training is also only interesting with the limited resources available in Kaggle, but not ""in the wild"", where training any native 3D network, even from scratch, can and should surely be conducted, if superior results can be expected. More importantly, though, I find the basic motivation of the paper (""there are no pre-trained 3D classifiers I could use, but I want to participate in the RSNA challenge"") too weak to accept it. In the light of the challenge, with limited time and limited resources, it is certainly acceptable to go with such an (unsurprising) approach. The solution apparently works well, and can be applied practically to similar whole-volume classification tasks with weak labels (i.e., those with no spatial clues). On the other hand, the motivation is not given by a clinical task, but to a certain degree by the setup of the challenge. Taking into consideration that there are more than 30 better performing solutions in the challenge, and that also the feature-engineering approach of Qure.AI is just a little bit worse, I can't see the practical benefit of the presented work. 
Further taking into account that the approach is not novel or surprising, overall I cannot suggest it for publication. """,2,0 midl20_6_1,"""The paper focuses on echocardiography with the target of automating the workflow. The authors propose to use classifier networks and object detection networks for view detection and valve localization, respectively. The method detects multiple valves simultaneously. The study covers 6 echo views and is a pilot study which shows the feasibility of valve detection with deep learning. The paper is a straightforward solution for view classification and valve detection. - The dataset size is acceptable and could attest that the results are generalizable. - The results demonstrated in Figure 4 are promising. - There is no novelty in the method, as conventional neural architectures are used for both classification and object detection. - The results in Table 1 are not acceptable. The mAP (IoU: 0.75) values are close to 0.05-0.3, which is not desirable. - The dataset is not public, so the results are not reproducible - There is no comparison with other deep learning (or even non-deep-learning) methods for valve detection in echo - There is no comparison with the many existing echo view classification works. The paper does not add any value to the community and its audience. There is no novelty in either the methodology or the application. The results are also not promising. And there is no comparison with other methods on the proprietary dataset.""",2,1 midl20_6_2,"""The authors propose a deep learning-based pipeline to classify views of the human heart in ultrasound images and localise valves (three valves: AV, MV and TV) in the ultrasound image planes. The authors go to the lengths of explaining the medical foundation justifying the importance of their approach, and they supplement that with data about the impact of cardiovascular diseases related to heart valves. The authors use off-the-shelf deep learning models and methods for their approach, and they perform a pipeline of view classification and object detection in cascade; the choice of object detection model depends on the view classification. The results are shown on a rather large dataset. The authors present a complete pipeline for valve detection. The discussion is sound, sometimes maybe a bit verbose, but the details presented in the paper are definitely all consistent and the methods used in this work are appropriate for the task at hand. The paper offers some interesting details straight from the clinical domain. These details contribute to clarifying some of the peculiarities of echocardiography that need to be noted when building methods to analyse such images. The authors evaluate their method on a rather large dataset. The authors claim this work is novel since object detection has not been applied to ultrasound images together with view classification. I do believe that this is a quite articulated novelty claim, which could actually be omitted in the abstract (maybe mentioned elsewhere instead). Something that I personally find strange is that the authors change the geometry of the ultrasound image from a fan-shaped area over a black background to a ""cartesian grid"". This changes the geometry of the ultrasound image. Despite the fan-shaped image area, structures being imaged are not distorted. By changing the geometry to a ""cartesian grid"", distortions are actually introduced! This is evident in some of the images in Figure 1. I wonder why this pre-processing is necessary. 
The thorough description of the Inception V3 network is not really necessary. It is very difficult to follow, and it does not bring much. I am unsure if the authors actually changed anything in Inception V3 (it should not be necessary to change anything!); if no change was made, it is sufficient to state ""we use the off-the-shelf Inception V3 network"". There seems to be no comparison of the results with other state-of-the-art algorithms or other works in the literature that tackle the same problem, which is - unfortunately - a serious issue for a conference paper. The paper seems to be a re-adaptation of a (journal?) paper meant for a medical audience. It is definitely a paper that has merit, because it is clear that there was an investment of time and work by the authors. Unfortunately, the lack of direct comparison with previous works that have tackled a similar problem reduces its strength in the context of a computer science conference. The paper has important notions and information about echocardiography that are definitely interesting to the MIDL community. All in all, the work has merit and I regard it as borderline. I think that the angle from which the authors have presented this problem (and its solution) does not match the topic of MIDL as a conference and a community. There is a lot of clinical information in the manuscript, but not much discussion about the computing aspects of it and the deep learning technique applied.""",2,1 midl20_6_3,"""This paper describes a two-stage pipeline for detecting heart valves (mitral, tricuspid and aortic) from echocardiograms. The first stage is a view classification step that classifies an image as belonging to one of 11 different classes, and utilises an Inception-V3 classification architecture. The second step is an object detection model based on Faster-RCNN which is used to detect the location of each of the valves. A large dataset of over 11,000 B-mode clips was used to train and validate the model. This is the first paper to my knowledge that demonstrates detection of heart valves in ultrasound video. The heart valves are very important to inspect in cardiac ultrasound in order to detect abnormalities such as stenoses or atresia. A large number of different cardiac views are considered and classified with very high accuracy. Performance of valve detection is also good in several views. The methodology is well justified and the paper is clearly written. The experimental validation is solid. Many other works have used CNNs for classification of views in echocardiograms, or have used object detection for detection of objects in ultrasound images or videos. Therefore, the technical novelty of this work is limited. The clinical applicability of heart valve detection alone is relatively limited, as the valves are not typically difficult for a sonographer to identify. While this is a solid paper with good results, in my view the limited technical novelty and limited immediate clinical impact mean that it falls slightly short of the level of acceptance for this conference.""",2,1 midl20_6_4,"""In this work, the authors address two issues in the automation of cardiac ultrasound imaging: view classification and valve detection. They collect an extensive dataset and apply standard neural network architectures to solve these problems. The results seem to be satisfactory. The paper includes a broad background on the application as well as a pragmatic preprocessing pipeline. The paper is very well written and organized. 
I did not have any difficulty understanding the methods. The authors really took care to make every detail so clear that the reader has no trouble reimplementing the method. This is definitely worth mentioning! Additionally, the preprocessing and splitting seem to be tricky, and the authors found a good way to work with the collected data. My main concern with the paper is the questionable novelty. The view classification was done before (as stated in the related work section), as well as the valve detection, even in 3D (not stated in the paper; see Ghesu et al. 2016 (pseudo-url)). Unfortunately, the authors do not compare their results to the previously published methods and only provide one result for one standard architecture. This reduces the scientific value of the study, as the authors only show that a previously solved problem can also be solved with well-known methods, without relating these results to the previously published ones. All in all, the paper does not contain enough scientific value to accept it in the current form. However, I would be completely willing to vote for an accept if the authors make good use of the rebuttal period to include more comparisons of different methods. These might include previously published methods as well as other standard architectures for comparison. I think that a study focusing on applying standard methods to a specific problem can have value and could be worth publishing, but then I expect a deeper evaluation than simply feeding the data into one network and reporting the results. As mentioned above, I really like the clear style and the very comprehensive study! Hence, I would be happy to see a revised version addressing my concerns!""",2,1 midl20_7_1,"""The paper introduces an Inception-based network to predict Gaussians on the posterior intervertebral disks. This problem has important clinical applications and the results are reasonable. There are papers on labeling vertebrae in CT scans and radiographs with very similar approaches. They estimate the vertebra locations and label them using Gaussian functions or heat maps. The proposed method in this abstract could be compared to those relevant publications. 1) Payer, Christian, et al. ""Integrating spatial configuration into heatmap regression based CNNs for landmark localization."" Medical Image Analysis 54 (2019): 207-219. 3) Bayat, Amirhossein, et al. ""Vertebral Labelling in Radiographs: Learning a Coordinate Corrector to Enforce Spinal Shape."" Computational Methods and Clinical Applications for Spine Imaging: 39. 4) Sekuboyina, Anjany, et al. ""Btrfly net: Vertebrae labelling with energy-based adversarial learning of local spine prior."" International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2018.""",3,0 midl20_7_2,"""This paper uses a multi-modal, publicly available dataset of 235 patients to detect intervertebral disc coordinates. The paper is well-written and addresses preprocessing and validation metrics. However, additional implementation details could have been included. 1) How was the receptive field size (r) selected? 2) It was not made clear that the network processes sub-images of size r in order to count/detect the discs in the entire image. 3) It is unclear how many patches are extracted within each image. Why do the authors think that redundant counting produced better detection? The detection points in Fig. 1 and Fig. 2A are too small to determine the size of the predicted Gaussian functions. 
""",2,0 midl20_7_3,"""In the proposed approach, the authors used a fully convolutional network to localize intervertebral discs in MR images. The proposed approach has a number of serious drawbacks. (1) The proposed approached uses 2D images as the input, that is generated by the average of the six middle slices. This limits the scope of the method, due to the assumption that all centres of intervertebral discs are located in close proximity around the middle slice. It is also not clear what the slice thickness is and thus how the image generated by averaging middle slices looks like. (2) In the preprocessing step, the 2D image is straightened according to the spinal cord centreline. How is the centreline extracted? If the centreline has been already extracted finding the centres of intervertebral discs and vertebral bodies are greatly simplified and can be done with a variety of approaches that do not require the use of ML-based method. (3) The term labelling of the intervertebral disc might not be correctly used in this work. The proposed network does not distinguish intervertebral disks, but only generates the centres of visible intervertebral disks in the images. The term localization should be better. (4) There is at least one workshop each year at MICCAI that is combined with a challenge dedicated to labelling and segmentation of spine structures. Vertebral body or intervertebral disk localization and labelling is usually part of each completion. Authors should compare their methods with the SOTA method presented in these challenges. Moreover, the CSI workshop at MICCAI 2015 was dedicated to localization and segmentation of intervertebral disks from 3D T2 MRI data. """,1,0 midl20_7_4,"""The paper presents a method to localise and label intervertebral discs in spinal MRIs. The method takes in a sagittal slice of a T1-weighted MRI and produces gaussian heatmaps of possible disc locations. The paper is a bit vague in terms of how the heatmaps are labelled; do you assume the topmost heatmap is always the C2-C3 IVD or do you predict separate heatmap channels for each IVD?""",3,0 midl20_8_1,"""The paper proposes a two-stage patient-level PE prediction pipeline. Stage 1 learns to predict PE from 2D image slices using both slab-level and pixel-level annotated dataset. By aggregating the output of Stage 1 using the bidirectional Conv-LSTM layer, Stage 2 learns to predict PE at the patient-level. Experimental results for both slab-level and patient-level PE prediction are provided. The ablation study shows pixel-level annotation data is important for a major improvement in both slab-level and patient-level PE prediction. Automatic Diagnosis of Pulmonary Embolism Using an Attention-guided Framework: A Large-scale Study: The paper is nicely written, and the illustrations are well-done. In general, the paper is easy to understand. The authors have a large dataset, and clinical motivation is very clear. 1. Comparisons with other state-of-the-art methods are very weak and not fair. The authors compare the proposed method with only one method PENet (Huang et al., 2019), which has been trained on a different and much smaller dataset. In this scenario, it is difficult to judge whether the claimed performance improvement of the proposed method is due to a larger dataset or better method. In addition, no comparison with other known methods as cited. To have a fair comparison, the baseline methods must be evaluated on the same dataset and data-split. 2. 
The proposed method requires a large amount of expensive pixel-level annotation to predict patient-level labels. It is mentioned in the paper that ""[o]ne strength of our pipeline is that it can be trained using volumetric images with only binary labels"", which is not a methodological advantage unique to the proposed method. A 3D CNN can also be trained with binary patient-level annotation only. Since the authors have a large-scale dataset, a much better approach could be training a 3D CNN using patient-level annotation only and comparing it, as a baseline, with the proposed method. 3. It is mentioned in the paper that ""[t]he attention map is then normalized by its maximum value to range between 0 and 1"". This normalization should make the attention map similar for both negative and positive samples, which should hurt the final prediction. 4. Using 2.5 mm as the slice thickness is probably too thick for detecting subsegmental PE. 1. Comparisons with other state-of-the-art methods are very weak and not fair. 2. The proposed method requires a large amount of expensive pixel-level annotation to predict patient-level labels. 3. Using 2.5 mm as the slice thickness is probably too thick for detecting subsegmental PE. Please see the details of the weaknesses. """,2,1 midl20_8_2,"""This is a well-written paper which used a two-stage deep learning framework to classify contrast-enhanced CT images for the presence of PE. A 2D first-stage ResNet is trained to classify slices with attention supervision. There are two loss terms: one for the classification error (categorical cross-entropy) and one for the attention (continuous Dice coefficient). The second stage is a recurrent neural net based on a bidirectional convolutional LSTM using the features from the last layer of the ResNet. The second stage is trained with extra cases for which only a scan-level label is available. The method is validated and tested on a large dataset coming from multiple sources and shows good performance. - This is a well-written paper with a clear introduction to the problem at hand - The paper provides a framework that can be used for other applications and is explained well - Large dataset used for training, validation and testing - Experiments show the effect of adding extra cases with no slice-level labels but only scan-level labels. - There are some important details missing about the data. From how many hospitals did the data originate? What were the CT characteristics, such as the distribution of the slice thicknesses? The limitation that the authors did not know whether two studies belong to the same patient is risky. Do the authors know how many patients these scans come from? That would help to estimate the risk that there is leakage between train and test. - No comparison with human performance and the corresponding inter- and intra-observer variability - Hard to compare with the other approaches because a different dataset is used, but I cannot blame the authors for this. This is a well-written paper which uses a novel methodology to predict PE presence at the scan level. In addition to this, it uses a large-scale dataset to train and validate the performance of the deep learning system. The method is well explained and the experiments support the claim of the paper. """,4,1 midl20_8_3,"""In this paper, the authors propose a two-stage method to diagnose pulmonary embolism (PE) in CT volumes. In stage I, PE masks are utilized to supervise the learning of attention maps during the training of a 2D classification network. 
In stage II, the trained network is then used to extract meaningful features for each slice, and those features are passed to a recurrent neural network for the final diagnosis of PE. Experiments on a large dataset show the effectiveness of their method in PE diagnosis. What is more important is that the diagnosis process can be visualized rather than being a black box. 1. The proposed method can provide explainable features (PE attention maps) about the decision process of deep learning in PE diagnosis. It can help the doctor to understand why the method gives the current result; thus, it is more acceptable to doctors compared to a regular classification model. 2. The method is trained and validated on a large-scale dataset, and is shown to be effective in PE diagnosis. 3. The paper is overall clear and easy to follow. 1. The comparison to the state-of-the-art method is not reasonable, because the two methods use different training and test data. The performance of a method may vary a lot on different datasets. 2. The downsampling of PE masks from 384x384 to 24x24 may greatly affect the actual sizes of PEs, especially for small PEs. For example, it seems that the attention score for the small PE on the right in the second row of Figure 3 is not good enough. 3. The description of the three scenarios in section 3.4 is not clear enough. It can be improved by summarizing them in a table. The interpretability of this method is very important to the development of AI methods in medical image analysis, especially for applying novel methods to practical medical scenarios. However, there are some issues that need to be addressed. I will change to strong accept if the authors can answer my concerns.""",3,1 midl20_8_4,"""The paper describes a method to detect pulmonary embolism (PE) and indicate possible PE lesions. The method was validated on a large internal dataset with a total of more than 10,000 studies for training and around 2,000 studies for testing. The proposed method consists of two main steps; step 1 focuses on generating per-slice features trained on attention masks and classification labels, while step 2 focuses on using the features of each slice to predict patient-level PE. - The idea is simple and easy to follow; I particularly like the idea in stage 2 to move from slice-level PE to patient-level PE inference - The paper is well written - Validated on a large dataset (10,000+ training, 2,000+ testing) - State-of-the-art results - The only major weakness is the lack of novelty - Minor weaknesses: Is there a possibility of a training/test leak? In section 3.1 it was mentioned that ""Note that, due to the specific anonymization protocol used by our data provider, we are unable to determine if two studies belong to the same patient."". The paper is sound and easy to follow and was validated properly; large dataset for validation. It is well written and ideas were clearly explained and validated with proper experiments. The only gripe is the lack of novel ideas.""",3,1 midl20_9_1,"""In this paper, the authors present a deep learning model that integrates a variational auto-encoder (VAE) and a generative adversarial network (GAN) to generate synthetic images conditioned by synthetic labels. The method is novel; however, I do have some concerns regarding the method to ensure the consistency between the MRI used as a style image and the generated cardiac shape. 
The evaluation explores different experimental setups for two different open datasets available in cardiac cine-MRI (ACDC and Sunnybrook cardiac data), which makes the paper interesting to read. It would be interesting to know additional details about the data augmentation procedure used in the experiments. The observed gain in performance in the segmentation task seems to be quite large and probably significant; however, a comparison with alternative methods for generating synthetic cardiac-MRI images is missing. """,3,0 midl20_9_2,"""The authors combine a variational auto-encoder and a GAN to generate synthetic cine MR images along with the associated labels. They test their method on two well-known datasets showing good results. Clarity: Unfortunately, there is not enough space to discuss details about the method and potential limitations (see comments at the end). There are some parts of the results that are not commented on. What is fine-tuning? Quality: Average. The paper would gain quality if there could be more discussion of the results. I would suggest removing the data augmentation and fine-tuning experiments, as they are not the goal of the paper, to make room for explaining the limitations of the paper. Significance: The authors address a relevant problem. Pros: - Reported accuracy is good Cons: - You use cine MR images. These images have a temporal component. How do you guarantee that the results are consistent in time? I mean, is the image at time T consistent with the one at T+1? If you are not addressing this, at least it should be mentioned. - Using a given dataset to generate a synthetic dataset and then using images from the original dataset for the testing (Table 1) is not very challenging as a test. Most likely, there is bias. - The experiments on data augmentation and fine-tuning (which is not really commented on) are somewhat out of place. The point of the paper is to show the value of the synthetic data generation. - The method tries to solve the problem of limited annotated data. However, GANs are methods that require a lot of data to be able to generate good results. The experiments are not sufficient to prove that this would not be the case in this setup and the authors don't really comment on it.""",2,0 midl20_9_3,"""The paper describes a method for GAN generation of cardiac MR images along with generated anatomical segmentations for these new images. The authors then train a segmentation network and demonstrate improved results (in general) when the training set is enlarged with GAN-generated images and their (generated) segmentations. The method described is interesting and should prove useful in the absence of large annotated datasets. The results seem reasonably convincing although I have some questions. Given the limitations of the short paper, I think that this is an interesting and reasonably well-described work. Specific comments to improve the paper are below: - The specific anatomical structures segmented are not named - The sizes of the datasets used should be mentioned, since these are clearly very important for the reader's understanding (and to save them searching the literature) - The properties of the test sets are not described - from which set do they come and how large are they? Validation data is also not described. - The term fine-tuning is used without explanation until late in the paper - the meaning of this term in the context of this work should be explained before presenting the results. 
- (outside the scope of this short paper) It would be interesting to experiment with combining the two public datasets. It would also be useful to see a graph of how the results improve as additional training images are added - is there a saturation point beyond which additional training images are not useful? Are the generated images more useful if there were a larger number of images (i.e. more variability) in the dataset that was used to train the generator? """,3,0 midl20_9_4,"""The authors designed a GAN-based synthesis method for cardiac data segmentation. I have some concerns regarding the results of this work: for instance, the authors proposed a complicated model, but the segmentation results are quite similar to or even slightly worse than those with data augmentation. Also, GAN-based synthesis has been proposed and widely tested before. The novelty of the study is quite limited.""",2,0 midl20_10_1,"""The paper presents a model-independent approach to consider uncertainty which consequently helps the overall segmentation performance. It provides several concepts of uncertainty to consider during training and shows a proxy loss function to explicitly account for it. It overall provides a nice set of experiments and analyses with interesting takeaway messages regarding uncertainty. 1. Well written and easy to read. 2. The distinction between the types of scoring rules is appreciated. 3. Extensive experiments with thorough analyses. 4. Consistent improvement across several experimental setups. I do not have major comments on weaknesses. A minor comment is on self-containedness, with the uncertainty map figure not being in the main paper. This is quite minor though. Another minor comment is the lack of mention of existing uncertainty estimation methods (e.g., MC-dropout) which could output a different type of uncertainty that may replace the softmax. It is an overall solid paper with well-written details and thorough experiments. The method is simple and reasonable, although I wonder if the softmax is the only uncertainty measure it could consider. Still, there are interesting observations from the experimental analyses that may benefit readers.""",4,1 midl20_10_2,"""The paper presents a novel method for selective segmentation that tries to maximize the performance on a practical target instead of the full training target by introducing an uncertainty loss. The proposed training scheme can be applied to most existing segmentation frameworks in a plug-and-play manner. The method was evaluated on two datasets (MM-WHS and GlaS) and outperformed the baseline (without the proposed uncertainty loss) in all metrics. 1) By focusing on uncertainty, this work addresses an important direction in medical image segmentation. Unlike many other works related to uncertainty in medical image segmentation, the paper introduces a new principle, selective segmentation, which is borrowed from the classification literature. 2) The method is evaluated extensively. The paper demonstrates the benefit of the method on different metrics, two datasets, and several experiments. The multitude of experiments helps the reader to get a better understanding of the approach. Also, the hyperparameter analysis is quite helpful. 3) The problem and main terms are well-introduced (especially Figure 1) and are beneficial for the general understanding. 4) The authors provide code. I consider this very important when introducing a new method because it improves reproducibility. 
Unfortunately, there are still a lot of papers presenting new methods without code. 1) In my opinion, the main weakness of the paper is the writing and the lack of clear messages. In detail, this leads to the following problems: a) Difficult to read. The paper lacks a description of the idea in simple, easy-to-understand words. Although the method seems valid, I find it hard to follow all the details in section 2.2. An improved structure, including repetitions of the important information, would be beneficial. An example is the description of pseudo-formula (including the theorem), which is described in detail. However, the final loss does not contain pseudo-formula because of its non-differentiability. I believe this could be simplified. b) The motivation for the uncertainty loss is unclear. The softmax cross-entropy is described as a proper scoring rule which tries to recover the actual distribution pseudo-formula. It is also stated that selective segmentation does not require recovering the distribution pseudo-formula. It is unclear why one should not recover the actual pseudo-formula (even though not required) with the cross-entropy loss if this loss is anyway used to optimize the segmentation task. Or, why is the uncertainty loss even needed if the cross-entropy is already trying to do more than required? c) The benefit is not obvious. Although the experiments help to understand the method better, the benefits of the proposed method are not obvious. It seems that the initial Dice coefficient performance already improves (although c=1 is not shown in Table 1) with the proposed method. However, it is not clear whether the improved performances at the subsequent coverage values (e.g., 0.95, 0.9, ...) are due to an improved uncertainty or the initial benefit. It seems that the baseline has higher deltas between two consecutive coverage values. Additional clarifications of the results and a more extensive discussion of the results are required to improve the understanding. 2) The adoption of the proposed setup is limited. As described in the introduction, selective segmentation only predicts voxels that the model is certain about, and the remaining ones are left for expert annotation (Figure 1). This setup is, in my opinion, not realistic. If a radiologist has to annotate all uncertain voxels in an image, the time gain compared to full manual annotation will most likely be very limited. The paper provides an interesting approach and extensive evaluation. Unfortunately, the structure and writing make the paper hard to read and understand. Therefore, I suggest rejection of the paper unless the readability is improved.""",2,1 midl20_10_3,"""1. The authors designed an interesting uncertainty-aware method for semantic segmentation and tested it on cardiac and gland images. 2. The paper is well-written, with some descriptions in the Appendix. 3. Experimental results are significant and the comparison results seem good. 4. The proposed method is novel. 1. Some details of the method are missing. 2. The reference included for the MM-WHS segmentation challenge was wrong. Zhuang, Xiahai, et al. 
""Evaluation of algorithms for Multi-Modality Whole Heart Segmentation: An open-access grand challenge."" Medical image analysis 58 (2019): 101537. The authors designed an interesting uncertainty-aware method for semantic segmentation and tested on cardiac and gland images. The paper is well-written, I just have some suggestions: 1. Is there an automated and adaptive method to determine parameter c? Please elaborate more details. 2. The experiments have been done on MM-WHS challenge and please refer to the correct reference: Zhuang, Xiahai, et al. ""Evaluation of algorithms for Multi-Modality Whole Heart Segmentation: An open-access grand challenge."" Medical image analysis 58 (2019): 101537. 3. Are the results in Table 1 got statistical significance? 4. The other concern is that how the proposed framework can cope with the real clinical studies?""",4,1 midl20_11_1,"""quality: This is a well-written paper which tackles and interesting clinical problem, has a well-described framework and experiments and does a good evaluation. clarity: Of course more details would be nice, but considering the brevity of the submission framework for short papers, the paper is very clear. originality: I am not aware of the background clinical literature in this area, but it seems a novel application. significance: The significance of the clinical solution is high and the presented algorithm seems to perform well enough to actually be a possible solution in the future. pros: - the paper has a very nice twist of making the network robust, but excluding certain image modalities during training - the multi-task learning approach to learn the labels all at the same time is also very appropriate for this problem cons: - all abbreviations (OS, T1ce) should be introduced, not everyone has the same background in the clinic or in MRI to understand this - why are there so different balances in training, validation and testing data? why were they not all divided in the same way in terms of cases? CAVEAT: The authors state themselves in the abstract: ""This short paper only contains a brief summary and selection of results from a manuscript that will shortly be submitted to Neuro-Oncology."" Is this allowed according to MIDL guidelines?""",4,0 midl20_11_2,"""This paper utilized the U-net to segmentation the glioma regions and then utilize the multi-task classification model to classify the corresponding grade, IDH mutation, and 1p19q co-deletion. The task of this paper is interesting. The technical novelties of this paper is low and this paper is an application-based model. """,2,0 midl20_11_3,"""This short paper proposes a method for classification of glioma grade, IDH mutation, and 1p19q co-deletion, and trains/evaluates it on a fairly large dataset. The method also performs a segmentation, using a standard 3D U-Net trained on BraTS 2019 training data, with dropout on MRI sequences, to make the network robust to missing sequences in the application phase. Strengths: - A reasonable sized test set (100 patients). - The pipeline seems engineered well. - The classification accuracies are quite high. Weaknesses: - It is not clear how the tumor segmentation is exactly used in the classification network. Do you only use it to define a bounding box for the region of interest? Or do you mask the original image and set all pixels outside the segmentation to zero, for example? - Section 3: ""For 1p19q status we only included LGG cases"" -> it's not clear whether you did this for the train or test set, or both. 
- Confidence intervals should be given for the classification results in Table 1. - Too many decimals are given in Table 1. - Section 1: the relation to this work could also be discussed: pseudo-url """,3,0 midl20_11_4,"""The authors proposed a deep learning based algorithm for brain tumor segmentation with prediction of grade, IDH mutation and 1p19q co-deletion. The results are promising. However, there are two questions: 1. The authors mentioned that the network was trained and evaluated on a large heterogeneous dataset of 628 patients, collected from The Cancer Imaging Archive and BraTS 2019 databases. To my understanding, the BraTS data may also include some data from The Cancer Imaging Archive. I am wondering if there will be any data leakage between training and testing. 2. For the BraTS data, the authors should refer to the latest benchmark: arXiv:1811.02629""",3,0 midl20_12_1,""" Summary: The authors propose an active learning method using a variational auto-encoder. They assess their method by predicting structural stress within the vessel wall in intravascular ultrasound. Strengths: -The idea of navigating the latent space using the difference between the model's predictions and the ground truth is original and makes sense. -The article is well-written. Weaknesses: *I am unsure about the practical use of the method. The proposed method reaches its optimum performance at the same time as the baseline. You wouldn't want to consciously use a suboptimal system for medical research, would you? *The authors do not cite nor discuss relevant literature. *Some details are unclear when they could have easily been added without using additional space. Detailed comments: Experiment was repeated for five times for plotting the mean and variance (Figure 2(b)) I don't see mean and variance in Figure 2. The neural network model F is first trained for a couple of epochs with the current training set using a given loss function. How many epochs? predefined size of training samples What is the size? The authors could cite more relevant literature instead of the computer vision datasets. For example: Kingma, D.P. and Welling, M., 2013. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Biffi, C., Oktay, O., Tarroni, G., Bai, W., De Marvao, A., Doumou, G., Rajchl, M., Bedair, R., Prasad, S., Cook, S. and O'Regan, D., 2018, September. Learning interpretable anatomical features through deep generative models: Application to cardiac remodeling. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 464-471). Springer, Cham.
Not sure how much evaluation is expected in the short paper, but in my opinion, the paper lacks enough experimental evidence to support the key hypothesis. Moreover, as shown in Fig. 2b, the improvement of the proposed method over baseline is not clear or not presented clearly. Also, can authors present more examples of generated samples? It could be understood that the current paper is more about the idea but the current model relies on the generative capacity of the model and VAE are well known for producing bad samples. The authors are suggested to consider [1] to improve sample quality or discuss the effect of sample quality in the current work. [1] Diagnosing and Enhancing VAE Models [Dai and Wipf, 2019]""",2,0 midl20_12_4,"""Brief summary: VAE is used to encode the raw image to some feature representation in a latent space, the annotation suggestion is done based on the latent space. The supervised training loss provides some gradients that reach the latent space. Such gradients are used for selecting the next batch of images for annotation. Quality: Below average; Clarity: Average; Originality: New to me. Significance: In terms of the experimental results, the improvement is not significant. The proposed method is compared to a simple random selection method. Supposedly, one should get much better results when comparing to random selection method. Pros: interesting idea, interesting topic. Cons: (1) Lack of comparisons with the state-of-the-art annotation suggestion methods. (2) The proposed method relies on gradient feedback from the supervised training loss for new sample selection. Such feedback only could give you some local movements in the latent space. In this sense, the selected samples might not be the most effective samples for the active learning task. (3) Sampled new point in the latent space may not correspond to a valid image sample after applying the decoder on it. Namely, the decoder can give you noisy ""image samples"". (4) No strong justification that we should do annotation suggestion in the proposed way. """,1,0 midl20_13_1,"""This paper proposes a U-Net based architecture which segments the prostate peripheral zone (PZ) and performs detection and multi-class classification of PZ lesions. The network includes an attention mechanism that allows searching only in PZ areas. The method was tested on a dataset 98 patients all included with T2w images, and ADC maps and the results were compared to those obtained by standard U-Net architecture trained without the attention mechanism. FROC analysis was used to evaluate the proposed method. The authors reported a 75.8% sensitivity at 2.5 false positives per patient and 66% sensitivity at 2.5 false positives per patient for their method and U-Net baseline model, respectively. - The proposed architecture obtains the PZ segmentation together with lesion detection and lesion grading. Using the information on the lesion location improves the results by decreasing the number of false positives and by taking into account lesion differences among different areas - A very similar approach has already been presented at MIDL 2019 (Effect of Adding Probabilistic Zonal Prior in Deep Learning-based Prostate Cancer Detection) and not referenced - The experimental part is not robust enough. - The comparison with the baseline U-NEt is not fair; most of the CAD systems include the step of prostate segmentation. 
If the authors wanted to show how zonal segmentation improves CAD performance, they should have compared the results with a U-Net trained on the whole prostate and not on the whole image. - The lesion grade analysis is ""to be considered with care"" (as stated by the authors) due to the small number of lesions per class and fold. The method does not have enough novelty, and the authors did not reference a very similar paper. The statement on lesion grading is not supported by enough data and the experimental part lacks robustness. The authors did not show enough results to support their claim.""",1,1 midl20_13_2,"""key ideas: - a multi-class deep network to first segment the PZ, second detect PZ prostate lesions, and third perform GGG grading - Input is bi-parametric MRI (ADC and T2w) in two separate decoding branches. Experiments and their significance: Performance was evaluated using a large multivendor dataset correlating with histopathology as ground truth, training and testing with a 5-fold cross-validation. Adequate FROC analysis and kappa statistics were conducted to evaluate the performance. The dataset consists of a multi-vendor bi-parametric MRI collection acquired from prostate cancer patients. FROC analysis and kappa statistics were performed to evaluate the performance. The method is clearly written. Only PZ cancer; why not include the TZ? Please provide the standard deviation of the kappa statistics. How does this performance compare to the PROSTATEx challenge: pseudo-url It is a good paper, but I miss the application towards the whole prostate, not just the PZ. I miss the performance evaluation of the method on a public dataset such that the performance is comparable to existing methods in the literature""",3,1 midl20_13_3,"""Well described clinical problem definition of prostate MRI. Self-attention is an interesting strategy. Grouping it in different ISUP groupings is challenging. Prostate Cancer Semantic Segmentation by Gleason Score Group in mp-MRI with Self Attention Model on the Peripheral Zone. Not a lot of data.
Well written paper. """,4,1 midl20_13_4,"""The paper presents a method for prostate cancer grading (i.e. semantic segmentation of prostate cancer into different Gleason score groups) in MRI images. The paper is focused on grading in the peripheral zone, which is considered a valid clinical interest. The proposed solution also segments the boundaries of the peripheral zone region and uses this information as an attention mechanism to perform the subsequent detection and grading using high-level latent features of the second part of the deep network. They also validate the method by performing 5-fold cross-validation. ** Quality of evaluations: - The authors provide results in a 5-fold cross-validation setting and have also provided adequate visual results. ** Clarity and Relevance: - The problem is clinically relevant and the clinical dataset is well described. - The paper is well-written, the experimental setup is convincing, and in general, it is easy to follow the paper. ** Justification needed for the choice of label interpretation: - From the pathological viewpoint, a pathology report of GS x+y represents a tissue area with the combined pattern of Gleason x and y, where Gleason x is prominent in the region (e.g. Gleason 3+4 means we are dealing with a region with a larger area of Gleason 3 and some smaller areas of Gleason 4). Combining our knowledge of biology and machine learning, one key takeaway here is that, from the machine learning perspective, the multi-class problem we are dealing with here is a multi-instance learning problem. Instead of receiving a set of instances which are individually labeled, the learner receives a set of labeled bags, each containing many instances. [Wikipedia]. This is the beauty of machine learning for medical imaging though, to combine knowledge from biology, physics of modalities, and also machine learning. - So, based on the above point, and based on the very heterogeneous nature of prostate tissue, learning the difference between classes GS 3+3, 3+4, 4+3, and 4+4 cannot be addressed blindly as a simple multi-class problem; it should be addressed properly, either by using some special arrangements as previously proposed in [1] (learning a deep feature manifold, clustering), or by having a very large dataset including enough rare samples (i.e. a proper distribution of higher and lower grades) to perform multi-instance learning or multi-class classification.
- To back up my arguments, I just refer the authors to their own results presented in Figure 3, last row. In the GT we have a case of GS 3+4 (i.e. a large region of Gleason pattern 3, some smaller regions of Gleason pattern 4). The authors predict a large area of GS 3+4 and a smaller region of GS 4+4/GS 8, which is absolutely interesting. Let's break this down. The authors predict a smaller region of only Gleason pattern 4 and a larger region of Gleason pattern 3 (i.e. GS 3+4). Simply put, what I am seeing here is that your features are actually working fine; the feature manifold that you have learned as GS 3+4 is actually the manifold of Gleason pattern 3. - I am fairly confident that by performing some unsupervised method on top of your learned features, you will get a performance boost. ** Missing key references: [1] Azizi et al. ""Detection and grading of prostate cancer using temporal enhanced ultrasound: combining deep neural networks and tissue mimicking simulations."" International journal of computer assisted radiology and surgery 12.8 (2017): 1293-1305. [2] Azizi, et al. ""Classifying cancer grades using temporal ultrasound for transrectal prostate biopsy."" International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2016. Overall, this paper describes an extension of an existing approach and is of an incremental nature. Also, the authors have overlooked the meaning of Gleason scores from the pathological perspective, which I believe results in an impaired yet working solution. """,2,1 midl20_14_1,"""The paper proposes to train convolutional networks to segment pneumothorax by using three different parameter optimization strategies in cascade. A number of subsequent post-processing techniques are used to refine the segmentation output, namely binarization, dilation, and combination of binary masks across different models via union. Training and validation are based on data from a public Kaggle competition. The authors claim their results are within the top 0.01% of the leaderboard, but the reported performance could not be found in the current leaderboard (pseudo-url). It is not clear what the purpose of Figure 3 is, which shows performance curves for fold 0 and fold 3, and why the trends for these two specific folds are depicted here. """,1,0 midl20_14_2,"""This manuscript describes a pipeline of methods for pneumothorax segmentation. The proposed method achieved very good results, ranking in the top 0.01% in the Kaggle competition. However, the method was not described clearly. It is hard to understand how each step is exactly applied to achieve good results. A flowchart would be helpful. Also, more related work and an ablation study would help situate the work in the literature and demonstrate its effectiveness.""",2,0 midl20_14_3,"""This paper is about pneumothorax segmentation on X-Rays, a medical condition that might be tricky to spot for the human eye. The authors seem to reach very good performances (good ranking on the associated Kaggle competition), first by very aggressively tuning their hyperparameters (random search), plus adding miscellaneous heuristics as post-processing, with more hyperparameters. The most interesting part is the ensembling performed, where they take the union of the binary predictions of three models, as long as the models somewhat agree with each other (one more hyperparameter to define that threshold). To state things explicitly, I do not like methods with many hyperparameters that are tuned with heavy grid/random search.
I think this is the opposite of machine learning, and it is just a way to overfit the validation set. There is no motivation on the choice of post-processing methods, so this looks like they mashed together different methods until it worked, without any intuition behind it. The paper is very poorly written (plenty of typos, weird phrasing) and it is often difficult to make sense out of it. A few examples: ""Nowdays automatic segmentation of the organs under the risk or even various diseases,including cancer lesions and other abnormalities, became very demanding."" -- Very demanding in terms of what? Do you mean ""in demand""? ""Pneumothorax may appear in case of dull chest injury, as a continuation of hidden problems with the lungs, or even more there could be no reason at all for finding(Guptaa D., 2000). "" -- I just cannot make sense out of it. Based on the poor quality of the writing and methodology, I am choosing weak reject. Despite the good results, it doesn't seem to me that the authors would be able to communicate the method to other people, which is the point of a scientific conference (as opposed to a Kaggle leaderboard). """,2,0 midl20_15_1,"""This paper reviewed a series of two studies by Moyer et al. 2018 and 2019, and provides an overview of the proposed method to create a latent representation of fMRI data and remove the effect of scanner variation. Pros: - Removing site effects from the data is an important topic. The original work being reviewed here proposed a seemingly effective way to achieve this task. - In the discussion, the limitation of the methods is stated. Although it would be better to propose or discuss potential improvements to reduce the effect of ""low information sites"". Cons: - I found the ""overview"" of the method to be too brief. For example, in figures: - In the main method figure (1) describing the diagram of the network, details need to be added in the legend, e.g. the meaning of each term in the loss function (now only abbreviations are given, and the reader shouldn't need to refer to the original papers to find their meanings) - In the main results, there is no description of what ""Oracle"" means. A brief description of the datasets used (Connectome 30/60 and Prisma 30/60), along with the meaning of the numbers (30/60), is also needed. - The title is a bit misleading. The paper is mainly focusing on the overview of two previous papers, rather than an overview of ""scanner invariant representations"" in a more general sense, which is further limited by the lack of comparison with other state-of-the-art methods used in the papers mentioned in the introduction. Currently, there is only a comparison with a single baseline method which uses ""template-based methods"", which I found to be lacking. - I find the originality and novelty a bit lacking in this study.""",2,0 midl20_15_2,"""An evaluation of a method to eliminate the cross-site effect in diffusion MRI images is presented. Pros: - The work is framed within the state of the art and the current needs of the field, - it seems promising - The method proposed by the authors tackles problems of high complexity with relative success. Cons: - The description is so brief that it is complicated to evaluate how novel the work really is. - A comparison with more methods is missing, especially because the work is presented as ""Overview of Scanner Invariant Representations"" and there is only one comparison.
- The authors should include some extra experiments that prove how invariant the representation achieved with the method really is. Personally, I would like to see two kind od comparisons: - Compare the usual diffusion MRI measures obtained after using the proposed method (or the methods shown if available) with the measures corrected using methods focused on the summary statistical level [1,2]. - Use a trained classifier to differentiate between sites, after supposedly eliminating the effect with the method(s). Similar to how it is done in this paper [3] [1] Jean-Philippe Fortin, Drew Parker, Birkan Tunc, Takanori Watanabe, Mark A Elliott, Kosha Ruparel, David R Roalf, Theodore D Satterthwaite, Ruben C Gur, Raquel E Gur, et al. Harmonization of multi-site diffusion tensor imaging data. Neuroimage, 161: 149170, 2017. [2] Artemis Zavaliangos-Petropulu, Talia M Nir, Sophia I Thomopoulos, et al. Diffusion MRI indices and their relation to cognitive impairment in brain aging: The updated multi- protocol approach in ADNI3. bioRxiv, page 476721, 2018. [3] Glocker, B., Robinson, R., Castro, D. C., Dou, Q. & Konukoglu, E. Machine Learning with Multi-Site Imaging Data: An Empirical Study on the Impact of Scanner Effects. in Medical Imaging Meets NeurIPS 15 (2019). """,3,0 midl20_15_3,"""A central component of multi-site studies is the correction for scanner/site biases. To correct for these biases, the article reviews the approach by Moyer et al. (2019), who proposes an unsupervised solution using invariant representations based on autoencoder. While the article is nicely written, the value add of this review over the original publications by Moyer et al. is unclear. """,2,0 midl20_16_1,"""This paper proposes a domain adaptation method for scenarios where the source and target data are paired and have identical ground truth. It minimizes the discrepancy, such as the KL divergence, between the prediction distribution of the source and target domain. Evaluation is conducted on paired and registered MRI data with two similar brain segmentation tasks. - The paper is easy to read and well organized. - The presented method is evaluated on public datasets. - Ablation studies on training stability, kernel choice, and impact of hyper-parameter are conducted. My major concerns about this paper is the general applicability of the proposed method as the defined domain-adaptation setting seems to have limited application scenarios. Also, the experimental evaluation is not comprehensive enough to support the efficacy of the proposed method. The proposed method may have limited application scenarios as the defined domain adaptation setting is relatively strict. Also, the experimental evaluation is not comprehensive enough to demonstrate the efficacy of the proposed method. """,2,1 midl20_16_2,"""This paper proposes a new Unsupervised Domain Adaptation loss for segmentation networks. The loss consists on the usual cross entropy between predictions and targets (on the labeled domain) and an additional loss based on density matching in the networks output space, which is computed between inputs from different domains. The additional loss encourages inputs from different domains to produce the same output, up to geometric transformation of the input. + The proposed method is a simple addition to a standard supervised pipeline for training segmentation networks. + Two public benchmarks are used for evaluation. 
+ The proposed idea is well motivated and explained throughout the paper, and backed with relevant experiments. - The authors compare Mainly against AdaptSegNet (Tsai et al. 2018), which uses a different network architecture. It would have been interesting to see the same architecture used with the Density Matching loss, instead of the Adversarial Domain Adaptation loss in the original paper, to better compare both approaches. This paper has a good related work overview leading to a detailed explanation of the proposed loss for unsupervised domain adaptation (UDA). The proposed technique is backed both theoretically and empirically.""",4,1 midl20_16_3,"""The paper proposes a domain adaptation approach that aims to directly minimize the discrepancy between source and target examples for adapting segmentation networks. The setup is to adapt brain segmentation across different MRI modalities. The results are shown to outperform standard adversarial domain adaptation approaches. 1. The paper is well-written and easy to understand. 2. The paper demonstrates that using paired source-target images are beneficial to domain adaptation. 3. The proposed method works well on MRBrainS. 1. The domain adaptation setup. Domain adaptation usually refers to the marginal distribution matching between p(s) and p(t). However, in this paper, the goal is to match the joint distribution p(s,t). Due to the additional information manifested through the joint distribution, i.e., paired data, it is easier to learn better representations across different domains. In essence, the effectiveness of this model is tightly coupled with the need for paired data. 2. The definition of domain gap. One strong assumption of this paper is that the domain shift is restricted to different MRI modalities *within the same scanner and the same patient*. However, domain shift can also present in different scanners or datasets which cannot be addressed by the proposed method. It is more valuable to address cross dataset domain shift than the type of domain shift presented in this paper. 3. Evaluation. It is not clear how the evaluation is carried out. It appears that all the source data are used in training, which raises the question of what kind of generalization do we want to evaluate. There could be two types of generalization: (i) generalizing an example x_i from the labeled source modality to the same example x_i in the target modality, and (ii) generalizing an example x_i from an unlabeled source modality to the target modality. Because of the shared label structure between domains, it will be more interesting to evaluate examples that were not trained in the source domain, i.e., evaluate on a hold-out set from the source/target. 4. Baseline. The setup of this paper is different from (Tsai et al., 2018) due to the availability of the joint distribution p(s,t). It is not clear how the baseline (Tsai et al., 2018) is implemented in this paper. Is it based on marginal distribution matching where the source and target examples are shuffled, or does it have the same setup with the proposed approach where the source and target examples are always paired at the example level? 1. The definition of domain gap is restricted to paired data at the example level and does not consider cross-scanner or cross dataset domain shift. 2. Motivation and evaluation are not well justified.""",2,1 midl20_17_1,"""the authors use a metric learning approach for chromosome classification task. They use the Proxy Ranking loss to train the embedding function. 
They augment the embedding function with a convolutional block attention module (CBAM), which uses a combination of channel attention as well as spatial attention modules. They compare their method with other state-of-the-art methods on that dataset and perform an ablation study. The authors compare their method with other deep learning methods and also perform an ablation study, especially on different components of the model. Table 2 explaining the experimental setup is quite helpful. 1- The motivation of the paper: In the abstract, it is noted: ""In addition, the results of our embedding analysis demonstrate the effectiveness of using proxies in metric learning for optimizing deep convolutional neural networks."" This puts the emphasis on the proxy-based metric learning approaches. However, there is no ablation study or comparison with other metric learning approaches. 2- There is no good motivation for why the CBAM layer is used and what it does. The only explanation is "". CBAM sequentially infers two separate attention maps. To adaptively refine attention maps, both attention maps are multiplied to a input feature map."" I assume the authors are referring to the channel-wise and spatial-wise attention modules. Even so, the intuition behind using CBAM in a metric learning embedding function is unclear. 3- A better explanation of the objective functions and the whole training procedure is needed. While I think this is an interesting paper, I don't think it has enough contribution to be accepted at MIDL. I think it's more suited for a workshop. I would recommend that the authors provide more intuition or justification for why the attention module is useful. """,2,1 midl20_17_2,"""This paper proposes a metric learning based model for chromosome karyotyping. The model learns a proxy embedding for each class, and uses cosine similarity to classify new inputs based on the distance between the input embedding and each proxy embedding. The authors use a ResNeXt with CBAM to compute image embeddings, which are then compared to the class proxies. + The paper achieves state of the art results on a public chromosome classification dataset. + Improves previous results by a significant margin. + The authors show an analysis of the image embeddings computed by their model. My main concern with this paper is that most of the modelling decisions are barely justified. 1. The authors use CBAM ""to obtain adaptive embedding vectors"". What exactly does that mean? Adaptive to what? Why does it help the model to solve the task? 2. It is unclear why the authors approach the chromosome classification task as a metric learning problem. The paper says that ""The main advantage of metric learning is that it can exploit the semantic similarity of objects to regularize a network"" but that is not explained or backed with experiments. 3. As mentioned in the paper, face verification and image retrieval are successful applications of metric learning. However, these tasks can't be approached as a classification problem, as opposed to chromosome classification. What is the reason for a metric learning approach to be better than a classification model if the task can easily be framed as a classification problem? 4. Learning the proxies and applying softmax on the dot product between each proxy and each sample embedding is like learning a classifier on the embeddings, where the weights of the classifier are given by the proxies. Then, does the improvement come from the sampling strategy mentioned in the paper?
Even though the results are good, the experimental section is a bit poor, and should be improved to show where the improvement in performance comes from. It is not clear why the proposed model is better than previous approaches, as it is not compared with a standard model that learns a classifier on top of the CBAM embeddings. """,2,1 midl20_17_3,"""The paper proposed Proxy-ResNeXt-CBAM, a metric learning network that has an attention mechanism called CBAM and uses proxies in chromosome classification. The goal is to assist cytogeneticists with karyotyping and help them more efficiently classify chromosomes. Their best model outperforms conventional classification deep learning networks. The authors utilized the publicly available Bioimage Chromosome Classification dataset. Results on this benchmark seem promising and outperform some recent baselines. The experimental analysis seems thorough. - The paper lacks original contributions. Neither deep metric learning with proxies nor CBAM was originally invented here. It is thus a typical ""existing A + existing B, applied to some new C"" type of work. - The definition of ""proxy"" is very much unclear from the paper. Is that just the hidden features of the CNN, optimized under a cosine distance? If so, the authors over-complicated their description and may have overstated their contribution. - The motivation of CBAM is very unclear: it looks like the authors adopted it only because ""the classification performance of our network is higher"". Why does it help the proposed metric learning? Why just this specific attention, given the numerous attention mechanisms developed? None of those questions is well justified or motivated. - The writeup is not easy to follow, and the reading experience is not pleasant. Specifically, the authors seem to often unnecessarily self-repeat, e.g. ""We introduce Proxy-ResNeXtCBAM which is a metric learning-based network using proxies ..."""" Proxy-ResNeXt-CBAM which is the metric learning network using proxies outperforms ...""""Proxy-ResNext is a metric learning network that employs proxies"". See above weaknesses: 1) lack of original contribution; 2) the definition of ""proxy"" is very much unclear; 3) the motivation of CBAM is very unclear and not well motivated; 4) the writeup is very sloppy""",1,1 midl20_18_1,"""I was searching for relevant work but found an arXiv paper that is very similar and is under review at the Neurocomputing journal: pseudo-url This paper is poorly written and not well organized. It is unclear to me how the method works and the results section is also not informative. """,1,0 midl20_18_2,"""This short paper proposes to exploit dependencies among abnormality labels and uses label smoothing regularization for better handling of uncertain samples. Pros: 1. The proposed model gains a 4% improvement in AUC from the label smoothing regularization compared with pure U-Ones. 2. The proposed work achieves the highest AUC for 5 selected pathologies. 3. The proposed work is on average better than 2.6 out of 3 other individual radiologists. Cons: 1. All 14 labels are trained, but the model only has 14 outputs. Does that mean ""parent labels"" in the paper are labels included in the dataset? If so, is it guaranteed that a parent is positive when at least one child is positive? This is the essential assumption in the adapted model (Chen et al. 2019). 2. Terms not consistent: ""we propose the U-zeros+LSR approach"" at the end of Section 2.2. But U-Ones+LSR is evaluated in the ablation study. 3.
Lacks an ablation study with the model ignoring all uncertain cases (defined as U-Ignore in the paper).""",3,0 midl20_18_3,"""The authors present a work that classifies chest x-ray images with 14 different labels and uses hierarchical labelling and label regularization in an attempt to improve results. A leading performance on the public CheXpert challenge is claimed, but while the authors may have created a nice model, the claims they make in this paper are not well proven or explained. The method for using hierarchical labelling appears to follow a previously published scheme (cited) except with a different hierarchy (no details of the new hierarchy are provided). The method for label regularization is also previously published (and cited), therefore there is no methodological novelty in the paper. The authors apply their methods to the CheXpert public dataset. From section 2.3 it is not clear to me precisely what experiments were carried out - were all of these models trained with/without the hierarchical labelling and also with/without the label regularization? That is not described at all. Section 3 claims that extensive ablation studies were carried out, however there is not a single table or figure to illustrate the results of these. The text provides a few AUC values but the precise gain from the hierarchical labelling and from the label regularization is unclear. What is meant by ""U-ones+CT+LSR"" - this is mentioned in the results but not explained. The paper has no abstract. """,1,0 midl20_18_4,"""This paper presents a multi-label classification framework based on deep convolutional neural networks (CNNs) for diagnosing the presence of 14 common thoracic diseases and observations in X-ray images. The novelty of the proposed framework is to take the label structure into account and to learn label dependencies, based on the idea of conditional learning in (Chen et al., 2019) and the lung disease hierarchy of the CheXpert dataset (Irvin et al., 2019). The method is then shown to significantly outperform the state-of-the-art methods of (Irvin et al., 2019; Allaouzi and Ahmed, 2019). The paper reads well and the methodology seems to be interesting. I only regret the fact that this is a short paper, and there is therefore not enough space for a more formal description and discussion of the methodology.""",3,0 midl20_19_1,"""In this work, the authors present a novel approach for learning disentangled representations for domain generalization. To this end, they present DIVA, the domain invariant variational autoencoder. The key concept is to not learn one latent space, but three independent ones for the class, the domain and residual variation, respectively. After deriving the theoretical framework, the authors apply the method to MNIST data and to a data set of pathological images, aiming at malaria classification. The results, especially on the latter data set, are impressive. First of all, the presented work is extremely well written; all methodological steps are clear. The motivation is clear and the delineation from related studies and competing methods is profound. In the evaluation, the method is compared to multiple state-of-the-art methods. The authors not only present their method, but also provide useful information on how to apply the general concepts of ""domains"" to typical problems in the medical field. Additionally, they provide the full code, which is great!
Although the paper is full of profund studies, I am missing an evaluation or at least a comment on the scalability of the method regarding the number of domains and classes. The authors propose to interpret patients as domains, which might lead to an extensive number of them. How accurate is the model in dependency of the number of domains? As the malaria dataset contains images of 200 patients, this study might be interesting. All in all, the authors present an outstanding paper on a novel domain generalization technique as well as impressive proof-of-concept studies. The paper is very well written and there are lots of details and further information given in the appendix. Although parts of this work, namely the general DIVA-concept as well as the MNIST study, were already presented at another conference, it is not yet officially published to the best of my knowledge. Having this as well as the additional extensive study on the malaria dataset in mind, I do not see an obstacle in the previous presentation. Due to all the points stated above, I highly recommend to accept this outstanding paper and believe that it would add a lot of positive value to the conference.""",4,1 midl20_19_2,"""The paper considers multiple latent subspaces in variational autoencoder for learning domain related and domain invariant representations. As the paper points out, the ability to perform such disentanglement is a desirable property in medical imaging. However, the proposed idea is not novel, and the separation of such subspaces has been a common strategy to disentangle among different groups, e.g., FairVAE (cited by authors). If the intent is to demonstrate the idea in domain generalization for medical imaging, the evaluation is not enough (1 real-world clinical dataset and one toy dataset). One of the strengths is the clear experimental evidence of the proposed idea in the presented toy dataset and the clinical dataset. The experiments do demonstrate that domain generalization could be realized in a clinical dataset, which could be very important as in medical imaging, we often need to deal with data samples from different hospitals or demography groups. There are multiple weaknesses of the paper: 1. The contribution of the paper is not clear. If the idea is about separating subspaces into multiple groups, it is not novel as there are plenty of examples doing that. E.g., FairVAE (cited), M1+M2 model (cited). If the idea is about incorporating unlabeled data, then there are large bodies of work in semi-supervised literature. 2. The evaluation criteria for the proposed method is not satisfactory. For instance, why the proposed is not compared against LG, HEX, and ADV in the cell image dataset and done so only for the toy dataset? 3. If the intention is to demonstrate the benefits of an unlabeled dataset in improving performance, the authors should provide a comparison against semi-supervised algorithms. 4. There has been a lot of efforts in the medical imaging community to solve the problems of both domain generalization and semi-supervised learning. However, the authors fail to relate their work to such an effort. The proposed methodology is not novel. The presented experiments are mostly carried out in the toy dataset and the experiment on the clinical dataset is not evaluated properly. The contribution of the paper is also not clear. """,2,1 midl20_19_3,"""The paper addresses learning latent representations that are domain invariant within a generative model. 
The proposed model decomposes the latent space into three subspaces to encode domain, class, and residual variability/distribution. An annealed beta-VAE is used to encourage the disentanglement of the latent distribution. Auxiliary domain and class classifiers are trained jointly with the model to encourage the domain and class latent subspaces to encode discriminative representations that are predictive of domain and class, respectively. - The model follows naturally from the VAE framework with the addition of subspace-specific encoders and auxiliary classifiers. - Semi-supervised training allows the use of unlabeled data, which is crucial for medical imaging tasks. - Comprehensive experiments and comparisons to SOTA methods. - Experiments on the cell data didn't include the SOTA methods that were reported for MNIST. - Some details about model training are missing. - Sample generation is poor compared to SOTA VAE/GAN generations (e.g., noisy samples in Fig 6 and noisy reconstructions in Fig 7). - Generative evaluation metrics (e.g. FID) are not reported, and with qualitatively poor sample generation, the generative aspect of the proposed model is not evaluated. - A comparison with a vanilla conditional VAE is missing. The paper proposes a straightforward but interesting generative model for domain invariance. Nonetheless, the early version that was already published (plus the weaknesses and questions raised) made me rate it as a weak reject.""",2,1 midl20_19_4,"""This submission presents DIVA, a variational autoencoder that comprises class-label, domain, and residual latent representations, to achieve generalizable image classification performance across potentially different domains. DIVA is implemented to analyze both MNIST and a malaria cell image dataset, with a performance comparison against other domain adaptation methods. 1. The DIVA architecture assumes three independent subspaces to capture domain-invariant, class-specific information for generalizability across domains. 2. Experiments with ablation studies (in appendices) on both MNIST and malaria cell image data are illustrative. 1. It is not clear why variational models are selected as the base architecture here. There is a similar work on embedding without variational distributions (please refer to the link: pseudo-url), which should be checked to see how these two models compare. 2. For malaria cell images, the domain is chosen to be the patient ID, which indeed determines the class label as all the images with the same ID will have the same class label. Will this be problematic? 3. When comparing with the existing methods, based on the literature review, it appears to be necessary to at least compare DIVA with DSR (Ruichu Cai, Zijian Li, Pengfei Wei, Jie Qiao, Kun Zhang, and Zhifeng Hao. Learning Disentangled Semantic Representation for Domain Adaptation.) and multiple source domain adaptation (Han Zhao, Shanghang Zhang, Guanhang Wu, Jose M. F. Moura, Joao P Costeira, and Geoffrey J Gordon. Adversarial Multiple Source Domain Adaptation). Also, it is not that convincing to include the residual subspace based on the presented results, if generalizability across domains is the main interest. 4. The hyperparameters for the MNIST and malaria experiments should be given in detail. It appears that for these two experiments, the hyperparameters are quite different. The authors may want to discuss guidelines for tuning these parameters. The proposed DIVA can be efficient and help domain-invariant learning.
The reported experimental results are illustrative. More comprehensive experiments should be provided to justify the model development. """,3,1 midl20_20_1,"""The authors developed a skull fracture detection model based on Faster R-CNN. The aim is to better detect fractures in small regions and reduce false positives. A U-Net based full-resolution feature extraction network was combined with skeleton-based region proposals to achieve this aim. Detecting small regions of interest automatically, particularly for skull fractures, is significant. The combination of the non-learning based skeletonization method with the learning-based CNN feature extractor is valuable. The use of a CNN with full-resolution feature extraction is aimed at detecting smaller objects. Data splitting is problematic. 2D slices from a subject can be randomly assigned to train/validation/test sets. Subject-based splitting is recommended. Experiments were not performed to show whether the proposed network is indeed improving the detection by detecting smaller regions or due to other reasons. Table 2 should be extended to include findings on only small regions and further experiments should be performed to verify the main hypothesis. The paper proposes a method to better detect small fracture regions; however, it is not clear if the proposed network actually detects small regions or not. There are issues with the data split as well.""",2,1 midl20_20_2,"""A CNN based approach for skull fracture detection in axial 2D slices is presented. The approach is based on the Faster R-CNN in combination with a skull skeleton region proposal. The algorithm results in bounding boxes containing the detected fractures. The method is trained and evaluated on 45 head trauma patients with in total 872 slices, with a fixed portion used for training, testing, and validation. The proposed method is compared to just using the Faster R-CNN. The achieved precision is 0.65. The motivation of the paper is clear and the problem addressed is clinically important and technically difficult. The approach is more or less clearly described and compared to a direct application of the Faster R-CNN network. Fractures are typically seen better in some orientations than in others. A 3D approach would pose the fewest limitations on fracture orientation; if a 2D approach is used, the authors should discuss why the application to axial slices is sufficient. Regarding the proposed method, it is not clear to me why the multi-scale feature map is created for the whole image but later only used for the patches selected by the region proposal step. It is also not completely clear to me why the RoI-align step is necessary. The evaluation should not only present the achieved precision but also sensitivity and specificity. In general the paper is of interest to the reader since an important and difficult problem is addressed with some success. But there are also severe limitations in clarity and evaluation. Therefore a weak accept is recommended""",3,1 midl20_20_3,"""The authors propose to detect skull fractures using a modified R-CNN approach. In the proposed architecture, the candidate boxes are computed from a simple skeletonization procedure instead of an RPN. Simultaneously, a U-Net architecture is used to extract features. The proposed method is evaluated on CT scans from 45 head trauma patients and compared with the manual annotations by the radiologist. They obtain an AP of 0.68. 1) The idea of generating candidate boxes using skeletonization is interesting.
2) An encoder-decoder architecture is utilized to extract features from the skull fracture images. This feature extraction module, used along with the boxes, improves the detection accuracy. 1) The paper is poorly written with several grammatical errors and missing sentences. For instance, page 5, first line: ""Due to that, the candidate boxes..."". It is not clear what the author is referring to by the phrase ""Due to that..."". Page 7, first line: ""average precise (AP)..."" should be corrected to ""average precision"". 2) The equations are not explained thoroughly. For instance, in equations 3 and 4, what is the need for an exponential (e) for computing the predicted box width and height? In equation 7, what are Lc and Lr? What is the reason for weighting Lr by 10? The paper is very poorly written and very difficult to follow. In addition, a few critical equations lack a proper explanation/justification. The authors claim that the full-resolution feature network extracts small features. However, this claim is neither justified theoretically nor experimentally. """,1,1 midl20_20_4,"""This paper describes a variant of the Faster R-CNN object detection method for detection of skull fractures in CT scans. Instead of considering region proposals across the entire image, following a regular grid, the paper proposes to run an ad-hoc edge detection and skeletonization method on the image to obtain a very rough segmentation of the skull and to generate region proposals in an unsupervised way by placing an array of differently sized region proposals across the skull. The method works on 2D slices of head CT scans and is quantitatively compared with Faster R-CNN and qualitatively with another skull fracture detection method. The proposed idea is an attempt to include prior knowledge about the structure of interest (bone has high HU values in CT, the skull has clear edges with respect to other structures in the image) in a well-known object detection framework. This is something that makes a lot of sense to me for object detection in medical images, where we typically have such prior knowledge, and where such a simple strategy can avoid implausible false positive detections (at least to some extent). The paper is sometimes a bit hard to follow; the quality of the writing could overall be improved. The presented method detects skull fractures only in 2D while the scans are 3D. The evaluation therefore also did not consider how well the detection results of neighboring 2D slices agree in 3D. A severe limitation seems to be the way the training/test data is split - it appears as if the 100 test slices were just randomly sampled from all annotated slices. It is therefore not clear whether neighboring, or almost neighboring, slices from the same patients were in the training and the test sets. Especially since fractures will often be visible in multiple slices, this would be a severe issue for the evaluation. This is a somewhat hard to read paper with limitations in the experimental setup, but with at its core a small but interesting idea. If the authors are able to incorporate some of the feedback, it might make for a decent contribution to MIDL 2020.""",3,1 midl20_21_1,"""The authors present a method to assess the similarity between pairs of images for large datasets that relies on the scale invariant feature transform. Both the method and the results described are interesting and of high quality.
However, this work does not seem to be in line with the topic of the MIDL conference as no deep learning seems to be used in the analysis.""",2,0 midl20_21_2,"""Interesting short paper that proposes a keypoint-based morphological signature for large-scale neuroimage analysis. Unfortunately it is not related (or the authors did not make any connection) with deep learning. This is the reason why the title of my review is ""out of scope paper"". It probably fits better on another conference.""",3,0 midl20_21_3,"""This paper presents a method to create a neuroimaging signature of an individual scan. The methods uses SIFT features and a kNN classifier. This paper validates the method based on 8152 scans. The authors analyse the correspondence between the signatures of all pairs of scans to analyse their correspondence. The results show the overlap measurement of the signatures is different between pairs of scans from the same individuals, from twins, from siblings and unrelated scans. I find the analysis and its results very impressive and interesting. However, I am wondering if MIDL is the appropriate venue to present this, as no deep learning methodology is used. """,3,0 midl20_21_4,"""I actually reviewed the journal version of this. It's an excellent paper which highlights the very important role that traditional feature based computer vision still has in medical imaging. Especially for tracking and comparing cortical features which deep learning is yet to prove it can do. Minor comments: I don't think the abstract does a good job of highlighting the key impact of the method. There is a lot of technical jargon. I would recommend re-summarising in plain english. In general the abstract is lacking a high level description of the motivations and the approach.""",4,0 midl20_22_1,"""1. Overall, the approach lacks originality, as multi-resolution feature fusion has been explored by the deep learning community. 2. It is difficult to see 2 inputs and 2 outputs in figure 1 as mentioned in section 2. 3. The fourth pooling layer is not connected to any layer/block in figure 1. 4. It would be better to have names of the authors along with the method names in table 1. 5. The proposed approach is best only for the segmentation of esophagus and not for other organs. 6. It is not clear whether the word ""images"" in section 4 denotes 2D slices or 3D scans. """,2,0 midl20_22_2,"""This paper presents a unet-like architecture that is enriched with skip connections from lower levels of the downsampling/upsampling paths towards upper levels. The task the authors attempt to solve is multi-organ segmentation from CT scans, and results are comparable or better than the state-of-the-art, according to a nice evaluation (the test set of a grand-challenge). I believe this is a solid short paper and I support acceptance. I would like however to see more technical details about what is the exact way in which connections are built in this network. Figure 1 could benefit from a better written caption, in this sense. Minor comments: 1) In the first page, you probably wanted to write ""generative"" instead of ""generational""? 2) Could you please clarify if the input/output of your architecture is volumetric or bidimensional? You first mention that you implemented a volumetric OAR segmentation method, but later in the text you say you had 21,000 256x256 images, which sounds like you dealt with 2d images. 
""",3,0 midl20_22_3,"""#Summary This work proposed a new deep-learning architecture for the segmentation of normal organs at risk in thoracic CT data. The authors introduce residual connections from downed scale to upper scales for skip connection of U-Net architecture. They explained these residual connections between down-scaled feature map and upper feature maps achieve multi-resolution feature learning of volumetric data. #Pros Performance comparison among the proposed method and the three best-performance methods in the AAPM grand challenge by using two independent datasets for train and test, respectively. - The proposed method achieved 0.05-0.13 higher segmentation accuracy (dice similarity coefficient) than the other three methods for esophagus segmentation. - For the other organs except esophagus, the proposed method achieved 0.01-0.02 less or equal segmentation performances than other methods. It looks comparable. #Cons No theoretical and reasonable explanations about how to select the connection path in Fig. 1 There are several options to connect different scales. In Fig.1, the paths from down-scaled features to upper-scale features exist even in up-convolution parts of U-Net. It looks strange for me, because feature extraction might be done in encoding part, that is, the former part of U-Net before up convolutions. Why? What kind of operations the architecture adopted is unclear for the handling of the different size of feature in residual connection. How to upsample is not presented. Just zero padding, nearest neighbor interpolation, bilinear or cubic interpolation, or Gaussian pyramid? """,2,0 midl20_22_4,"""The paper is very well organized. This is a well written paper and everything is clear. Paper is easy to follow with clear motivation about the method. The dataset used in this study is large, from multiple sites and the performance of their segmentation network is interesting. However the weaknesses of this paper is that the authors didn't mention the effect of different reconstruction kernels, slice thickness or the effect of contrast injection in their model.""",3,0 midl20_23_1,"""The authors present an efficient procedure for doing Neural Image Compression. The idea is to tune the lower layer features by biasing them towards some downstream classification tasks. This retains useful discrimnative information while compressing the content. This has been observed across come vision tasks especially in scene parsing, and the paper provides application in medical imaging as well. This is a well written paper. Clear explanations in terms of what is needed to address the WSI dimensionality issue. Evaluations seems to suggest the benefits of the proposal. End to end system. Has good impact. From the perspective of the technical presentation and motivation there is not mot much weak aspects here: What was the rationale for using these 4 specific tasks? Anything unique about them that would drive the NIC? In Figure 3, and section 4.3 what was the rationale for the jump from 3 to 4 tasks? The per task accuracies are very high, so I wonder if the tasks themselves are reasonably easy when trained independently i.e., the proposal is task sensitive. What do you mean by not shared in section 3 first para? The tasks are trained together correct with the same encoded representation? So they are shared? See above: Specifically --- Clear explanations in terms of what is needed to address the WSI dimensionality issue. Evaluations seems to suggest the benefits of the proposal. 
End-to-end system. Has good impact. """,4,1 midl20_23_2,"""Histopathology image analysis is difficult because histology images usually are large and there is a substantial ""noise-to-signal"" ratio. As the authors mentioned, there is a need for efficient sampling methods to reduce the dimensionality of these images. The paper's main hypothesis is that multitask supervised learning might allow learning general and meaningful features. - The paper addresses a complex and increasingly relevant topic in histopathology image analysis. - The final supervised tasks are clinically relevant, and results using the compression technique showed better performance in those final supervised tasks than the original NIC formulation or other approaches without compression. - An analysis of which of the supervised tasks used in MTL contribute the most to the final representation is missing. - More information about the models' performance on each of the different supervised tasks is needed. This is a really insightful paper demonstrating the ability of the MTL framework to obtain meaningful and general features for histological image compression. However, the paper's contribution would be greater if additional details and evaluation regarding the different tasks used in the MTL framework were presented.""",3,1 midl20_23_3,"""In this paper, the authors aimed to improve the representations learned by Neural Image Compression (NIC) algorithms when applied to Whole Slide Images (WSI) for pathology analysis. The authors extended unsupervised NIC to a multi-task supervised system. A hard-parameter-sharing network was presented, with a shared, compressed representation branching out into task-specific networks. The authors evaluated the quality of these representations on multiple tasks, illustrating the added benefit of their multi-task system and the utility of using multiple tasks to supervise the feature extraction. * This is a very well written paper. The introduction and description of the state of the art, in addition to the main limitations of popular algorithms, are very clear and interesting to read. The experiments are clearly explained and the results are well presented. * The decision to supervise the feature extraction in a multi-task setting is good and makes sense. Multi-task learning can extract a shared representation that is generalisable, and this is evidenced in the results on the TUPAC16 set. * Good and convincing results when compared to competing methods * Strong validation * It is a shame that the Kaplan-Meier estimator was not repeated for all baselines to further illustrate the strength of the multi-task features * There are many more TUPAC16 results [pseudo-url. pseudo-url] yet the presented method is benchmarked only against 3. It would be helpful to put the results in context with all other methods, such as automatic and semi-automatic methods. Moreover, is there a reason you did not validate on all TUPAC16 tasks? This is a well-written paper with a clear description of the state of the art and the reasoning behind the presented method. The method is well explained and the validation is strong, with convincing results versus state-of-the-art methods. The work also raises some interesting points regarding multi-task training for pathology and with further work could be a good paper.""",4,1 midl20_23_4,"""In this work, the authors trained image compression using the multitask NIC and evaluated the obtained representations in two histopathology datasets that target image-level labels.
First, they modeled the speed of tumor growth in invasive breast cancer. Second, they predicted histopathological growth patterns and the overall risk of death in patients with colorectal metastasis in the liver. The paper is easy to follow, with clear motivation for the proposed method. The method is well validated with two different types of data. Results on the evaluated metric show the usefulness of the proposed method. Limitations of the work are clearly noted. Implementation details are not clear; providing them would make the paper easily reproducible if the dataset is made publicly available. The methods should be expanded to explain the different experiments used in this study. The discussion part is very short, without any rationale for why their method can predict the risk of death in colorectal and liver metastasis patients. The authors accomplished their results and target associated with the particular goal being rated. Results met all standards, expectations, and objectives. For overall performance, expectations were consistently met and the quality of work overall was good.""",3,1 midl20_24_1,"""The paper demonstrates a CNN-based segmentation network for non-contrast CT images. The performance of the network is evaluated with/without post-processing and compared to expert annotators. The results indicate that the proposed method produces results positively correlated with the experts, and that the segmentation accuracy, measured as Dice score, can be improved by post-processing. The authors make the interesting point that NCCT images are common in stroke imaging while segmentation methods are mainly proposed for MRI images. However, it would be more helpful if the authors could give some advantages of NCCT in clinical use, in terms of cost, accessibility, etc., and discuss why methods are more often evaluated on MRI images: could it be that NCCT images are more difficult to acquire, or public datasets are scarce to promote more research, or annotation on NCCT is less accurate due to its subtle contrast change? The paper reads more like an evaluation of an existing method, as the segmentation method is mostly built on DeepMedic. The correlation analysis to the neurologists' annotations is very interesting. It would be more interesting if more segmentation methods could be evaluated in this way and compared with one another on NCCT images, presenting the results as a baseline to promote further research. """,2,0 midl20_24_2,"""The presented paper describes the application of the pre-existing ""DeepMedic"" codebase for segmenting ischemic stroke lesions in non-contrast CT images. While the authors selected ""both"" as paper type, I do not see any methodological contributions. There is some standard pre- and post-processing; the preprocessing is only described in the appendix and mostly consists of skull stripping and HU range selection, while the post-processing is described in more detail but only consists of hole filling and removal of tiny isolated components. On the positive side, I would like to point out that the authors did not just present average dice coefficients, but performed statistical tests and decided to describe the performance based on quantiles. Furthermore, they had two observers annotate the dataset, and the dataset comes from 24 sites and shows quite some variability in imaging parameters. So, I believe the final evaluation results to be fairly realistic, much more so than with the average paper.
Critically speaking, the post-processing applied includes a fully-connected CRF; at least that's included in DeepMedic to the best of my knowledge. It's interesting that the CRF cannot learn by itself that isolated components of up to 3 voxels should be removed. Speaking of which, I found it strange that the unit of voxels is used for removing cruft, but the hole-filling threshold is given in ml. When the main paper merely stated that the ""datasets were preprocessed identically to ensure data consistency"", I thought that referred to the voxel size, since that varied considerably in the dataset. (Yet, no resampling was applied.) It might have been better to at least hint at the kind of preprocessing / harmonisation. I think masking the CNN's input both during training & inference would've made sense, but it is not mentioned. The conclusion that ""[the strong correlation] suggests a potential application of the model for volumetric assessment of follow-up lesions"" is a bit far-fetched; for instance, a lesion annotated to be around 6ml by human experts is segmented with >100ml, and a lesion with 40ml is underestimated to be only 10ml. Overall, I think it is a nice application paper. It does not present any technical novelty, but the evaluation is sound. (The authors also announce that they will release the model together with the final manuscript. Only the dataset is obviously private.) There is a duplicate ""were found"" in the last sentence of the first paragraph in section 3.""",3,0 midl20_24_3,"""Pros: * The segmentation output from the DeepMedic framework was corrected with post-processing, such as connected component analysis and hole-filling. * The median dice of the reported method shows improved performance over two manual graders. The correlation of lesion volumes between the human graders and the CNN-based segmentation was statistically non-significant. * Further, the authors have performed extensive statistical analysis to justify the significance of the method. Minor comments: * Do the NCCT datasets contain 272 samples (not dataset)? Are 204 samples (not datasets) used for training? * In section 3, ""were found"" is repeated twice in the sentence about no significant difference.""",3,0 midl20_24_4,"""The authors present their work on applying DeepMedic to segment lesions on NCCT images. The topic of the work is relevant and interesting. The evaluation on two data sets from hospitals not involved in training is a major strength of this work. The major weakness is the limited methodological novelty, since this is merely a validation study. As I said, the evaluation is very strong, because the authors used images from two hospitals not involved in training. This gives confidence that the performance of this method is reproducible in other studies. This is a major strength and unfortunately not very common in this field. The ratio of train/val/test data is quite skewed towards train/val: 204/48/20. Do you really need 204+48 images to train DeepMedic to achieve this performance? It might be very interesting to see whether the performance on the 20 test images changes when using less training data. If this method can be trained with less data, it would make it even more attractive to use. 20 test images, although from different hospitals, is still quite limited. Personally, I would have opted to include much more test data and use less training data. Perhaps in future work the authors can extend the test set and demonstrate performance on a larger data set. It might be interesting to specifically look at small lesions?
It is usually not very hard to detect / segment large lesions, and Dice is always high for larger lesions. Automatic solutions might be key in finding small lesions, since these are also hard to spot visually by an observer. Can the authors comment on the performance on small lesions (e.g. below median size)? Please report the inter-rater dice also in the text; I could only find it in the figure.""",3,0 midl20_25_1,"""The authors innovatively combine a CNN and a GCN to improve the segmentation results of the CNN. According to the authors, the refinement strategy is not limited to a specific underlying segmentation model and is generalizable. Segmentation is always a challenging problem that needs continuous improvement. The authors provided enough experimental results with multiple use cases. Some minor details might be provided to address further concerns from readers. The combination of proposed approaches in the paper is innovative. The paper is well written, and the background introduction and discussion are adequate. The experiment design is well considered, with comparisons. The abstract is less informative (not sure if it's due to word limits) but could be better organized to reflect the imaging modality, dataset and result details. Also, there is no definition of abbreviations; in the introduction, abbreviations are defined multiple times, for example CNN. For formula 3 the indicator definition is confusing; is it just a thresholding function? Not sure why square brackets are used rather than If the authors could provide more detail about how connections between nodes are established, that would be helpful. For example, why do perpendicular immediate neighbors need to be connected? What if the voxel is on a boundary, so that the two connected nodes fall into different labels? Also, why randomly choose 16 nodes in the graph? Do the distance between nodes and the nodes' original intensity affect the weight of the connection? The overall paper is well written and detailed. The approach proposed in the paper is innovative in its specific domain and use case. Some details about the graph definition, parameter choices and abbreviation definitions are minor issues that could be addressed.""",4,1 midl20_25_2,"""Graph convolution network (GCN) based refinement of organ segmentations in 3D is proposed. Uncertainty information is derived using Monte Carlo drop-out on the U-Net predictions, and a graph using the uncertainty and entropy information is constructed. Some of the nodes in the uncertainty graph are then labeled to indicate their uncertainty level. Uncertainty levels of the unlabeled nodes are inferred using a standard GCN. Performance of the method is compared with a baseline U-Net, and the refinement using uncertainty is compared with a CRF-based method. Experiments are conducted on two datasets for 2D segmentation. + The idea of constructing uncertainty graphs and partially labeling them is an interesting contribution + Use of GCNs in this setting is novel + Relevant comparison with the fully connected CRFs for the uncertainty refinement strategy + Experiments on two datasets show reasonable improvements + Thorough discussions about the influence of different parameters (tau, number of samples) are presented 1. The method is motivated for 3D but both experiments are presented using a 2D U-Net. Why was this choice made? Using a 3D segmentation model would be more natural in this setting. 2. The selection of node neighbourhoods by adding an additional 16 neighbours beyond the nearest neighbours is interesting.
However, what is the motivation to do this? How do these 16 random neighbours contribute to the GCN? If no attention was used when performing the GCN updates, these additional neighbours might hamper the learning. 3. Are the results in Table 1 significant? 4. The discussion in Sec 3.4 and Table 2 is interesting, but as the authors point out the standard deviation is large. Do your conclusions about the improvements due to the GCN hold? Again, are these bold-face numbers significant improvements? The idea of using GCNs to refine segmentations based on uncertainty is certainly interesting. However, a couple of key ideas (why a 2D segmentation model, choice of random neighbours, significance of results) need to be further clarified.""",3,1 midl20_25_3,"""The authors proposed a GCN-based post-processing step for segmentation refinement. The segmentation network can provide useful information about potentially mis-classified elements, and the later GCN can be trained in a semi-supervised way to refine the segmentation. Compared with CRF, the GCN refinement achieves 0.6% and 1.7% Dice improvements on the Pancreas and Spleen datasets, respectively. Larger improvements are reported when fewer samples are used for training and the CNN thus does not generalize adequately to the unseen testing data. 1. The idea of the GCN refinement strategy is novel and interesting. 2. The authors provided a detailed ablation study of GCN refinement by reducing the training sample size and changing the threshold. 3. The authors gave a comprehensive discussion of the insights behind the proposed model, which could help readers understand their model well. 1. It is not clear how flexible it is to work with other advanced segmentation models. The authors used a 2D U-Net as an example; is the model able to work with 3D models? 2. The focus of this paper is the GCN refinement method, which may limit its use in reality. I think recent segmentation models can have better results than the authors reported using a 2D U-Net + the proposed refinement. It would be better to compare with more recent segmentation models. 1. The use of GCN refinement is novel, even though it is not clear how this can work with more advanced segmentation models. 2. It is very helpful in the task of using 2D models for segmentation. The authors prove it can provide better refinement than the widely used CRF.""",3,1 midl20_25_4,"""In this paper, the authors propose a two-step segmentation refinement algorithm. In the first step, an uncertainty analysis is performed on the predictions from a CNN network. The Monte Carlo dropout technique is applied to the CNN predictions to obtain the uncertain regions. Next, a semi-labelled graph is built based on intensity, entropy, uncertainty and the output of the CNN. This constructed graph is then used to train a GCN and further refine the segmentation. 1. The authors validate the proposed method on two CT datasets, segmenting the pancreas and spleen. 2. The paper compares the proposed work against a 2D CNN (U-Net) and a CRF-based refinement method. 3. The paper also presents results describing the effect of training samples on the trained CNN. 4. The refinement method is extensively evaluated for different threshold values. 5. The effect of the graph construction process on the final segmentation is evaluated. Minor 1. What is the computation gain over the CRF-based refinement? 2. Is the training end-to-end? If not, what is the stopping criterion for the CNN?
Since the GCN seems to refine the segmentation better, did the authors consider stopping the CNN early and training a larger GCN model for refinement? 3. Increase the size of the figure legends (1 and 5) and the axis text; they are hard to read. The paper is well written. The authors justify the claims made in the paper. The proposed method is evaluated on two datasets. The effect of the different threshold values is studied. The results are compared with a dense CRF method. """,4,1 midl20_26_1,"""Overall, the quality of the paper is fair. It is well-written, well-structured and easy to read for someone without knowledge of IVF and ART. The method is compared to five embryologists, and the results clearly show that learning directly from the clinical outcome outperforms embryologists by a large margin. The main weakness of the paper is in the methods section. The methodological novelty seems insignificant. Plenty of works combine autoencoders with LSTMs. I suggest you either argue for the novelty or remove the claim from the paper. The methods section lacks details for reproducing the work. These must be provided in a supplement to allow reproducibility. If you want your work applied in clinics, this is much more important than improving the results. In the methods section you describe training an autoencoder on unlabeled data, then training an LSTM using the autoencoder embedding and embryologist grades. As I read it, UBar is the same LSTM just trained on clinical outcomes. You do not report results for the embryologist-trained LSTM, so what do you use this LSTM for? If you don't use it, remove it from the section. If you do use it, you cannot argue that you learn from ""a small number of labeled samples"" as done in the final paragraph of the paper. In the discussion you almost exclusively focus on the work by Tran et al and why comparing with that work is unfair. Instead, you should have made the comparison and highlighted the differences clearly. What is interesting is not who is better, but how, and how well, the task can be solved. You argue that including embryologists' decisions in the prediction is an easier task. I am not convinced. In your case, you train on data that has already been filtered to only include positive decisions by embryologists, otherwise the eggs would not have been implanted. It is not obvious how to best get around this issue, since the first embryologist screening probably has false negatives, but you need to take it into account. Your statement about AUCs and training sizes is either obviously correct or obviously wrong, depending on interpretation. The only way training size can influence AUC is by influencing the training of the model. It is quite well known that more training data, in general, results in improved performance of networks. This holds for all the popular performance measures. Having said that, if the model predictions do not change, then the AUC does not change. Maybe you meant the size of the test set? In that case, it is the ratio of positive/negative that is relevant. Regardless, trying to paint others' work negatively by appealing to some general issue with established performance metrics is disingenuous. If there is an issue with Tran et al you should state it clearly; if not, you should accept their results. A minor nitpick: you define all abbreviations except for UBar. It is fine that you give your method a name (although I personally dislike it), but a bit weird not to explain it. Finally, I would very much have liked to see a frame from one of the videos.
I am aware of the page limitation, so maybe MIDL should allow an extra page solely for an image of the raw data.""",3,0 midl20_26_2,"""The authors present a deep learning method for predicting embryo implantation probability, based on time-lapse videos acquired during IVF. The authors claim a substantial improvement relative to an expert panel of embryologists, as measured by AUC and predictive values. The method is assessed using 10-fold cross validation on 272 videos with known implantation outcome, and 4,087 videos with panel grading. There is some pre-existing similar work in the literature - most similarly Tran et al 2019. This is appropriately cited by the authors, who describe subtle differences compared with their work. Despite this similarity, replication of a solution to the general problem on different data (using the authors' method rather than that of Tran et al) counts as sufficient originality. Significance of the work seems high: the problem is clearly important, and the potential for improvement over current clinical methods seems substantial. Quality of the work seems high, particularly given the short format: the authors present a convincing method and then validate it quite thoroughly. Clarity is good, although it would be beneficial to introduce the work of Tran et al earlier, and better explain the differences in the authors' work. The authors also do not explain their method in great detail, although I feel this is understandable given the short format. On balance I am impressed by this paper, and strongly believe it should be accepted. Pros: * Well-motivated. * Well-described. * High quality validation. Cons: * Fairly similar to pre-existing work, although I think this is perfectly fair and the work still has significant originality. * Slight lack of clarity in differences vs previous work. * Lack of detail concerning the authors' method. * Confusing description of the dataset: how were the non-labelled videos used?""",4,0 midl20_26_3,"""The authors present a CNN+LSTM network architecture denoted as UBar for the prediction of embryo implantation in IVF. The main task to solve with the machine learning algorithm consists of analyzing a time series of images to make a binary prediction. The results presented look good when compared to other published works. However, I find the content of the article poor in terms of motivation to use this specific architecture. The authors focus on the limitations of using user-defined parameters while there are already some works that use machine learning methods. Hence, I would strongly recommend that they include further information in this direction that supports their work. The architecture is defined as ""UBar"" but it makes no sense to me due to the lack of details. I would recommend that they include a sufficiently detailed description of the network so any reader can understand what they did. Additionally, the code for the work or a comparison with other methods is not available, which makes it difficult to reproduce or evaluate the reported values. """,2,0 midl20_26_4,"""In this work, the authors present an LSTM-based model to predict the probability of successful embryo implantation based on time-lapse images of the developing embryo in an in-vivo environment. The results are very promising and discussed very well, and the work is well embedded within the field. The overall impression of the paper is very good. The scientific background is profound and the description of the methods is clear.
One minor concern is the description of the data set used, as I got a bit confused by the 8,789 data points of which only a subset is used, as only these were labeled. Additionally, the presence of two different sets of expert panels was a bit confusing as well. These parts could be explained more clearly. However, the authors put a lot of effort into creating a sufficient data set with a comparison to the clinical state-of-the-art and hence deliver a very good and reliable study. I recommend accepting this paper for MIDL 2020 and am sure that there will be interesting discussions during the conference!""",4,0 midl20_27_1,"""This paper presents a method for out-of-distribution sample detection. They use several convolutional neural networks (heads) to improve the performance of such a task. Using a set of models or ensemble learning is not a new idea for improving the detection rate. Furthermore, it seems the proposed method has achieved acceptable performance at the expense of increased computational cost. The paper addresses a challenging problem. They have exploited several models for detecting the out-of-distribution samples. Experimental results show the proposed method is able to detect the out-of-distribution samples. - The novelty is very limited. Using several models (here heads), instead of one, is not a new idea. - The experimental results are incomplete. A comprehensive discussion and a comparison to state-of-the-art methods are necessary. - Multi-head models increase the computational cost. An analysis of this cost and a comparison with previous methods is needed. - It is not clear why such a method works better than one-class classification. It seems the generality of such a method on new (unseen) samples would be better than this method. The novelty of the paper is not enough. Experimental results should be improved. The complexity of the method should be analyzed. The paper neglects to mention important previous methods for out-of-distribution detection. """,2,1 midl20_27_2,"""The paper deals with an important topic: uncertainty quantification in DNNs for medical diagnosis, in particular, digital pathology. It is well motivated and written. However, it is only an application of multiple hypothesis prediction (MHP) models to classification problems in pathology. In this regard, the study is missing out on novelty. But, I liked the approach and how a multi-head network can compete with other more demanding methods like Deep Ensembles. - Decent problem specification - A competitive model with less computational demands - Actually, the multi-head model performs better than the others. - Comparison with other methods, good evaluation and promising results Despite its simplicity and other benefits like improved performance with less computation compared to Deep Ensembles, head diversification introduces an additional hyperparameter. To avoid mode collapse at the head level, dropout is also added, which adds more hyperparameter(s): the dropout rate. Even though I could live with a few or a couple of additional hyperparameters, especially considering the aforementioned benefits, it might be some sort of nuisance to deal with additional hypers in other applications. But not too bad. A decent application of multiple hypothesis prediction (MHP) models. Considering the computational gains on top of performance, it is a promising work. However, there are some issues pointed out above. If they are clarified, then I would like to see this work accepted.
""",3,1 midl20_27_3,"""The paper introduces the use of multi-head neural networks for uncertainty quantification on digital pathology. The paper uses a meta-loss that shall induce diversity in the different heads. The proposed method achieves competitive performance and is capable of detecting outliers on the digital pathology task. The paper applies a very resource efficient method for uncertainty prediction to digital pathology. It is simple to implement and shows competitive results on the target task. The method provides improved uncertainty estimation and out-of-distribution detection than a baseline model. Even though the paper has an extensive validation, I am very cautious of the results from the ensemble baselines. The paper uses subsampling of the original dataset to encourage diversity of the neural networks. However, Lakshminarayan, et al. already showed that ensembles without subsampling but different initialisations perform well. This and the presented behaviour that an ensemble of 10 performs worse than an ensemble of 5 makes me question the quality of the baselines. The authors mention simple data subsampling according to Karimi, et al., however, Karimi, et al. seem to perform a data sampling strategy aimed at improving performance on difficult examples, which is very different from simple subsampling. Further, the authors hypothesise that the improved performance of the multi-head approach is caused by increased diversity of the predictions which is induced by the meta-loss. I would improve the quality of the paper if the authors would report metrics like 'disagreement' of the predictions as proposed in Lakshminarayan, et al. This could also remove uncertainties of why the M-heads- baseline with 5 heads performs on par with M-heads with meta-loss. Further, it is mentioned that the meta-loss is practically infeasible to be used for ensembles. However, it should be possible to decrease the batch size and use train multiple models in parallel with the mentioned meta-loss. Both those comparisons would greatly improve the insights into the proposed and alternative methods. I would expect ensembles with that meta-loss to perform better, due to better coverage of the parameter space. However, the multi-head approach has significant resource benefits. I believe that the paper proposes a valid and interesting method for uncertainty estimation and out-of-distribution detection on digital pathology. However, the paper does not introduce methodological novelty and misses important analysis in understanding the behaviour of the method and relevant baselines.""",2,1 midl20_28_1,"""This paper try to reconstruct images with motion blurring from sinogram. It trains two networks, one to estimate the blur kernel, one to reconstruct clean image. They experiment their method on toy examples only, without any experiment on PET/SPECT/CT images, even synthetic ones. It might be a good paper in the future. But current manuscript is way too preliminary. It respects the physics of image acquisition, and uses sinograms as input. It uses deep network, so the inference should be very quick. It directly estimate blur kernel from blurred image using a CNN, which as far as I know, is not an easy task. If it could work well on real data, then the proposed method would be a breakthrough. Unfortunately, the authors only test on toy examples. Toy examples have absolutely sharp edges, which are very different from real data. Experiments on real data is absolutely necessary. 1. 
The proposed method is only evaluated on toy examples. 2. I understand paired training data from real PET/SPECT/CT is not available. Yet the authors did not even test on synthetic motion blur on real CT. Such an experiment does not have high data requirements: it only requires real clean CT images from any public dataset, then applying a blur kernel and adding some noise. I don't understand why such an experiment is not done. 3. If such an experiment had been done, this paper would be acceptable. I understand paired training data from real CT is not available. Yet the authors did not even test on synthetic motion blur on real CT. Such an experiment does not have high data requirements: it only requires real clean CT images from any public dataset, then applying a blur kernel and adding some noise. I don't understand why such an experiment is not done. If such an experiment had been done, this paper would be acceptable. It might be a good paper in the future, but the current manuscript is way too preliminary. """,1,1 midl20_28_2,"""The authors propose a motion correction model using experiments with synthetic data. The synthetic data consist of Gaussian-blurred binary masks of various generated structures, which are transformed to sinograms with the Radon transform function. Two networks are proposed: the first learns the Gaussian kernel from the corrupted sinograms, and the second learns to reconstruct an uncorrupted image from the corrupted sinograms. - The paper is well written, with clearly presented methods for the synthetic data generation pipeline and the models used for Gaussian kernel estimation and uncorrupted synthetic image recovery. - It is a clear proof-of-concept that motion, modeled as a Gaussian blur of a binary image and transformed to a noisy sinogram, can be corrected for using deep convolutional neural networks. - While the paper is clearly written, there is very limited applicability to relevant imaging modalities (e.g. computed tomography) given: (i) limitation of the synthetic data as a proxy for real cardiac CT images; the authors generate synthetic 2D images as binary masks of different structures. A model which performs well on corrupted binary structures says little about whether it would perform well on real image data. (ii) limiting assumptions about the representation of motion artifacts in the image and sinogram - Gaussian blurring of binary images and transforming to a sinogram is far from a reasonable approximation of the complex manifestation of motion blurring observed in cardiac CT. For example, motion corruption in cardiac CT is typically the result of cardiac contraction during acquisition of the sinogram. This does not affect the resulting reconstructed image in the way a simple Gaussian blur applied uniformly across the image would. Most other motion in the field of view is minimal relative to CT acquisition time. No medical image data was used, and the synthetic experiments proposed have very limited applicability to real use cases - specifically, generated binary structures have very little resemblance to real anatomical images, and the approximation of motion as a Gaussian blurring is unrealistic for the relevant medical imaging modalities. """,1,1 midl20_28_3,"""In this paper, motion during acquisition is simulated by (1) blurring of image data using a symmetric Gaussian filter and (2) discrete Radon transformation of the resulting blurred image data.
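For concreteness, this kind of blur-then-project data generation can be sketched in a few lines; the following Python snippet is my own illustration (assuming numpy, scipy and scikit-image are available; the noise level and number of projection angles are arbitrary choices), not the authors' code:

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.transform import radon

def make_blurred_sinogram(mask, sigma, noise_std=0.01, n_angles=64):
    # blur a binary shape mask with a symmetric Gaussian, then project it
    blurred = gaussian_filter(mask.astype(float), sigma=sigma)
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sino = radon(blurred, theta=angles, circle=False)
    sino += np.random.normal(0.0, noise_std, sino.shape)  # additive detector noise
    return sino

yy, xx = np.mgrid[:64, :64]
disc = ((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2)  # toy binary shape
sino = make_blurred_sinogram(disc, sigma=2.0)

Note that in such a pipeline the blur is applied to a single static image before projection, which is the core problem discussed below.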
Neural networks are trained for the tasks of filter estimation and artifact-free image reconstruction based on a set of synthetic shape masks and corresponding perturbed sinograms. Motion correction is indeed a relevant problem in medical imaging. However, the proposed approach is not suitable to solve the given task (please check the first point in the weaknesses for a detailed explanation). - The paper is based on the assumption that motion-perturbed sinograms can be generated by (1) blurring of corresponding image data using a symmetric Gaussian filter and (2) discrete Radon transformation of the resulting blurred image data. However, this assumption is not correct. You do not capture the time dependency of motion in your model. Let's for instance have a look at CT raw projection data. Each projection view (i.e. acquired raw data belonging to the same gantry angle) corresponds to a specific motion state. However, these views are not consistent in time. During reconstruction, motion states are mixed together, leading to motion blur in the reconstructed images. The appearance of the motion blur thereby depends on the angular reconstruction range (i.e. also in the image domain a symmetric Gaussian filter is not appropriate to mimic the effects of motion during acquisition). Motion-perturbed sinograms could, for instance, be generated by simulating image data in different motion states and applying time-dependent forward projection. The performed approach of generating a blurred static image volume with subsequent forward projection is therefore not sufficient. - You do not accurately split your training and validation data. I would highly recommend assigning all samples belonging to a specific shape mask and a specific filter size to validation. With your approach of random data separation on the augmented database, you are not able to investigate whether your networks really generalize or just memorize the shape masks and different filter sizes. I have severe doubts that your trained networks will generalize to any kind of unseen image data or filter kernels. - You state ""Advantages from our method are: once trained, our model is agnostic to any modality, subject to the assumption that timing resolution of the modality is much lower compared to the motion frequency, which causes the motion to appear as a blur"". However, you do not address that different modalities are associated with different transfer functions between raw data and reconstructed image data. - In clinical practice, raw projection data is often much larger, i.e. not limited to 64x64 pixels. Acquisition of thousands of projection views will make it impossible to process a whole sinogram at once using the proposed neural networks. As the key idea of the paper does not work out, I cannot recommend acceptance of this paper. Furthermore, issues in the learning setup are identified which cast severe doubts on the transferability to clinical practice.""",1,1 midl20_28_4,"""The paper describes deep learning approaches to (1) recover a motion blurring function from a noisy blurred sinogram, and (2) reconstruct an image from a noisy blurred sinogram. The first model is a simple 3-layer CNN and the second model is based on DeepPET. Both are trained and evaluated on synthetic data in which the original images were simple shapes. Both models seem to work on the data used for evaluation.
Deep learning based reconstruction of PET data is an active area of research and shows potential for addressing important issues such as motion artefact reduction. The work is very preliminary: the problem being addressed is not realistic, and it is not clear that the method would transfer to the much harder problem of real data. The motivation and some details of the experiments are not clear, and some terms seem to have been used imprecisely. The method is interesting, but the nature of the data used for evaluation makes this very preliminary work, and in my opinion not sufficient for a MIDL paper. I would recommend that the authors train and evaluate their method using more realistic synthetic PET data before resubmission to another forum. There are several PET simulation tools available that the authors could make use of, e.g. GATE.""",1,1 midl20_29_1,"""This paper proposes a few deep learning models to solve the fungi image classification problem. As stated in the paper, it is the first paper that focuses on using image classification to help the diagnosis of fungal infections. The paper lacks detailed information about the implementation and model training process, which is very important for drawing the conclusions mentioned in the paper. pro: This paper proposes to use deep learning for fungi microscopic image classification. The problem is interesting and impactful. The authors provide visualization of random clusters generated by the bag-of-words approach, as well as analysis from a microbiologist's perspective. cons: There are several key points missing in the paper. 1) How are the patches generated from the fungi images, and how many are used for training in total? 2) How is each model trained? Without a detailed training setting, it is hard to understand why InceptionV3 performs worse than AlexNet, or why the bag-of-words with InceptionV3 performs much worse than InceptionV3 itself.""",3,0 midl20_29_2,"""One particular strength of this paper was the authors' choice of fungal types to train and test their models on. They specify that the chosen types overlap a decent amount with the most common fungal infections. Another strength of their paper was the combination of multiple methods, AlexNet and bag-of-words, to achieve a better performance than any one method alone. What are the consequences of misclassifying a fungal infection and treating it with the wrong drug? It seems like, while potentially a useful tool to aid doctors, DNN image classification would not yet be safe to use alone in identifying an infection. The paper states that Candida tropicalis and Saccharomyces cerevisiae have the same main clusters. Would the image classification system not make similar mistakes in misclassifying similar-looking images? It is unclear that this is a viable solution. It was stated that preliminary diagnosis of fungal infections can rely on microscopic examination; however, in many cases, it does not allow unambiguous identification of the species due to their visual similarity. If visual similarity is not sufficient, how can one use the images alone to do classification? There is such a wide variation in deep learning methods - why is this? More information is necessary to understand the bag-of-words methods. """,2,0 midl20_29_3,"""In this work, the authors present a machine learning method to classify microscopy images of different fungi strains. The main motivation is to shorten the time of standard mycological diagnostics from 4-10 days to 2-7 days. The main motivation and methodology are well described.
The authors provide a comparison of different approaches and use cross-validation for the optimization of the parameters, which makes the obtained results more reliable. It is important to note the effort of the authors in using a bag-of-words approach, which supports an explanation of what is happening inside the model. While it is not always possible, this approach facilitates the use of machine learning techniques in real applications. I would like to highlight some points that might be important to get a final version of the text: - I would consider changing the title to ""Deep learning approach to describe and classify fungi microscopic images"". - In the last part of the clinical process described in Figure 1 (Species identification), it is said ""99% of identity results"". I would not write this, as it can be confusing and it is not the accuracy of the method highlighted in Table 1. - The data is divided into training and validation according to the preparations. Are these 2 preparations independent for each strain and always the same? - The authors say ""split our DIFaS database"". Could you please describe the acronym? - Could it be possible to specify the microscopy modality or setup employed in the image acquisition? - I miss a short discussion about the real benefits of this approach. In the abstract, it is said that ""... microscopic examination ... does not allow unambiguous identification of the species..."". Hence, I would like to ask the authors what the real scope of this kind of approach is, especially against a chemical test. Could it be possible to remove the chemical test from the clinical pipeline? What are the main drawbacks of this method, or what would be necessary to incorporate it into the clinical pipeline? - In Figure 1 there are two steps in which a microscopic inspection is performed. Could it be possible to work on the classification of the fungi using the images from this preliminary diagnosis? - Figure 2 contains some characteristic examples of the six clusters obtained from the bag-of-words. Could it be possible to add a label for each of the clusters? Also, it is said, ""it revealed that Candida tropicalis and Saccharomyces cerevisiae have the same main clusters"". Could you please identify which strains belong to each cluster? Or what are the histograms of each of the strains? """,3,0 midl20_29_4,"""This is certainly a very useful application of ML, but many details are missing. The authors talk about Fisher and SVM but this is not visible in Table 1. The combination of BoVW with deep nets is not explained either. Besides, showing the clusters (Figure 2) hardly qualifies as an explanation for the results. """,2,0 midl20_30_1,"""The authors propose a lightweight CNN model (< 1 MB) for locating potential tears in the knee on MRI images. The main contributions are two normalization layers (layer and contrast normalization) for 3D sub-images and the application of BlurPool downsampling. Promising results are shown on two knee datasets. The paper is well written and easy to follow. Even though the proposed model is lightweight (0.2M), it is shown to be on par with or better than a recently published model called MRNet (183M parameters). The selected application (detecting knee tears) seems to be clinically relevant. It is not entirely clear how crucial the proposed multi-slice normalization and BlurPool layers are. An ablation study and comparison to established methods like batch normalization would have been valuable.
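For reference, a BlurPool-style layer of the kind used here (a fixed blur kernel followed by strided subsampling, as in the anti-aliased downsampling literature) only takes a few lines; the following PyTorch sketch is my own illustration, not the authors' implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    # fixed 3x3 binomial blur followed by strided subsampling (anti-aliased downsampling)
    def __init__(self, channels, stride=2):
        super().__init__()
        self.stride = stride
        self.channels = channels
        k = torch.tensor([1.0, 2.0, 1.0])
        k = torch.outer(k, k)  # 3x3 binomial kernel
        k = k / k.sum()        # normalise so intensities are preserved
        # one copy of the kernel per channel, applied depthwise via groups=channels
        self.register_buffer("kernel", k.expand(channels, 1, 3, 3).clone())

    def forward(self, x):
        x = F.pad(x, (1, 1, 1, 1), mode="reflect")
        return F.conv2d(x, self.kernel, stride=self.stride, groups=self.channels)

feat = torch.randn(1, 16, 64, 64)  # toy feature map
pooled = BlurPool2d(16)(feat)      # -> shape (1, 16, 32, 32)

The low-pass filtering before subsampling is what is supposed to make the downsampling less sensitive to small shifts of the input; an ablation against plain strided pooling would show how much this actually contributes here.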
The method adopts approaches from the literature (instance normalization, BlurPool) and applies them to the problem of knee tear detection on 3D MRI data. The paper is well written but would benefit from an ablation study to better understand the value of the individual layers in comparison to the standard approach using batch normalization. The results and model size of the proposed approach are enticing.""",3,1 midl20_30_2,"""In this work, the authors proposed a new deep neural network architecture for detecting injuries/abnormalities in the knee. The main contribution of the work was adding a normalization step to the network and learning the affine transformation parameters during training. The normalization was followed by a BlurPool layer to address shift variance. The paper is written very well, and the implementation details are provided to help reproduce the results. The method was tested on two different datasets, which is impressive. The results of the model were also compared to the state of the art. From the following sentence, I understand that for each pathology, a different model was trained. If this is true, the model is not efficient. Contrast normalization yielded the best results for detecting meniscus tears, and layer normalization for detecting the remaining pathologies. The algorithm was explained very well. The results are also very nice. However, if different models were trained for predicting each parameter, not only training but also prediction would not be efficient.""",3,1 midl20_30_3,"""The paper proposes an interesting method (ELNet) to diagnose anterior cruciate ligament injuries from knee MRI by using multi-slice normalization and BlurPool. The cross-validation experiments on 2 different datasets show good improvement over the previous state-of-the-art method. A hyper-parameter search was carried out to get the best proposed model, but an ablation study for multi-slice normalization and BlurPool is lacking. The paper is well written and describes an interesting and relatively novel approach to diagnosing knee diseases. The methods are well explained, and the results are well compared to the previous state-of-the-art. Parameters are well searched to get the highest performance. The key contribution is: different normalization methods are used for different diseases to boost the performance. BlurPool is used in the network. The proposed network achieves higher AUC and MCC. 1. The purpose of BlurPool and how it improves the model is unclear. BlurPool is a pre-defined 3x3 kernel followed by strided down-sampling, which may have been well used in the backbone. An ablation study is lacking. 2. The proposed network applies different normalization for different diseases, but no results support the claim that for some diseases one kind of normalization is better than the other. 3. Lack of novelty. BlurPool is the only novelty of this paper, and layer normalization/contrast normalization acts more as a normalization search for different diseases. The result shows good improvement for knee diseases in MRI. Two different datasets are evaluated. Hyper-parameters are well searched, but more importantly, the paper lacks an ablation study for the two proposed novelties, and it is not convincing to count multi-slice normalization as a novelty.""",2,1 midl20_31_1,"""The authors present a method for generating realistic, computational physics-driven 4D images of cardiac MRI.
They use the XCAT model of the heart, generating segmentation labels at 25 cardiac phases from a physics-driven deformation of a biventricular heart model across the cardiac cycle. Cardiac Cine MRI data of 100 patients with matching biventricular labels are used from the AC/DC Challenge. Using the recently proposed SPADE GAN, a model is trained to generate synthetic cardiac Cine MR images conditioned on the segmentations, either from the labels of the MRI data or from the segmentations generated from the XCAT model. At inference, synthetic MR images can be generated for a given set of labels, where the anatomical information encoded by the segmentations is preserved in the generated images. Furthermore, the specific style from a given MR image can be transferred to a generated image while producing anatomy that is consistent with a target segmentation. - The paper is well written and clearly motivated. - The paper provides an elegant bridge between a highly controllable, physics-driven model of cardiac deformation and the challenging domain of realistic medical image generation, helping to address the common challenge of limited labeled training data for applications such as segmentation and biometric quantification of cardiac image data. - The ability to control anatomical and physiological parameters to produce a wide variety of realistic segmentations across the cardiac cycle, and then to generate realistic MR data from these segmentations, holds great potential for training task-specific deep learning models with limited data. - Although the overall idea is novel, the architectural novelties are very limited. - The authors explain that IN layers are removed from the encoder described in the SPADE paper for the VAE model. They do not show results demonstrating the value of this. - There is a lack of quantitative performance metrics. The authors demonstrate the power of combining physics-driven 4D simulation of cardiac deformation with recent developments in conditional image generation using SPADE to generate physiologically realistic cardiac MRI sequences which are anatomically consistent with a provided segmentation produced from the XCAT model. This holds great potential for generating training data in limited labeled data settings.""",3,1 midl20_31_2,"""This paper provides a method for 4D cardiac MR image synthesis. The model takes the XCAT heart model as the ground truth and employs SPADE GAN for conditional image synthesis. Then a style transfer network is used to adjust the style. The method could generate realistic and controllable 4D cardiac MR images. - The proposed method generates controllable and realistic images in 4D. Image synthesis is a great way to combat the limited training data in medical imaging. - The proposed method is based on a recent publication at CVPR 2019, SPADE GAN, which is a spatially constrained method for image synthesis. - A well-known phantom model is used for image synthesis. - Related work is thoroughly cited and discussed. - The background in the synthetic cardiac images cannot be controlled and is not consistent spatially or temporally. - A brief description of the SPADE GAN would help the understanding and completeness of the manuscript. The writing could be more organized. - There is no quantitative analysis. This paper provides a method for generating realistic 4D cardiac images. The visual results are very promising, with the ground truth region controlled.
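As a rough illustration of what label-conditioned inference looks like (a toy PyTorch sketch of my own, not the SPADE architecture used in the paper), the generator receives a one-hot segmentation map together with a style code, so the same anatomy can be rendered with different appearances:

import torch
import torch.nn as nn

class TinyConditionalGenerator(nn.Module):
    # toy generator: one-hot segmentation map + style code -> single-channel image
    def __init__(self, n_labels, z_dim=64):
        super().__init__()
        self.z_dim = z_dim
        self.net = nn.Sequential(
            nn.Conv2d(n_labels + z_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, seg_onehot, z):
        b, _, h, w = seg_onehot.shape
        z_map = z.view(b, self.z_dim, 1, 1).expand(b, self.z_dim, h, w)
        return self.net(torch.cat([seg_onehot, z_map], dim=1))

seg = torch.zeros(1, 4, 128, 128)   # e.g. background, LV, RV, myocardium
g = TinyConditionalGenerator(n_labels=4)
img_a = g(seg, torch.randn(1, 64))  # one style
img_b = g(seg, torch.randn(1, 64))  # another style, same labels

In the paper this role is played by the SPADE generator and its style encoder; the sketch only shows the input/output contract of such a conditional model.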
Lots of follow-up work could be inspired by this work, so I suggest acceptance of this paper. """,3,1 midl20_31_3,"""The paper proposes a hybrid method based on SPADE for cardiac MR image synthesis conditioned on segmentation labels obtained from the anatomical structure of a physical cardiac model. A SPADE-like model is trained on the ACDC dataset. A controllable 4D model is used to generate segmentation labels similar to those of ACDC in a 3D+t manner; those labels are then used for 4D generation. By using this controllable 4D model, the proposed method ensures the realism of the anatomical structures in the generated images. The paper is well-structured and provides details of the architectures and generation visualizations. Besides the effort on generating realistic images, the method also considers the correctness of anatomical structures, which is critical for down-stream applications. In the presented results and animation, the quality of generation is visually good. It would be interesting to see if the images generated by the proposed method can be used to improve segmentation of cardiac images or images of other organs. Some improvements may be needed: 1) Although the visualized results show good generation quality, quantitative measures such as FID, PSNR, etc. are also needed to evaluate the generation quality and compare to the baseline methods. 2) The generated data is in a 3D+t format, but it seems the method does not explicitly enforce temporal consistency in the generated images. The proposed method describes a framework to generate 4D cardiac MR images using labels obtained from physical models. The framework is based on SPADE-GAN and extends to 4D generation. The main advantage is that the meanings of anatomical structures are ensured and the generation quality is satisfactory. However, evaluation may be necessary to compare this method to the existing methods in order to better understand its advantage. More details should be provided to enable reproducibility. On the other hand, the weakness is that the method does not seem to explicitly ensure temporal consistency, although this may be ensured by the physical model; some discussion/clarification on this may be needed.""",2,1 midl20_31_4,"""This paper applied SPADE-GAN for 4D cardiac MRI synthesis. Personally, I do not get the main ideas of this paper; it seems to me it is applying a technique (SPADE-GAN) to a specific task (cardiac MRI synthesis). The experiments are not well done at all. There are even no quantitative results or analysis. We cannot make a conclusion just based on visualization. As for significance, I think it should be further explored whether the synthetic data are meaningful for the clinic. I think this paper should be further improved before publication. 1. The problem, i.e., 4D MRI synthesis, is interesting. Though many synthesis tasks exist in the medical image synthesis field, they are limited to 2D/3D. 2. The visualized images are beautiful. There are many well-presented images in the appendix, including a 4D MRI in the Dropbox. It is really cool. 1. This paper is really poorly written and very hard to follow. Sometimes it is even inconsistent, for example about what the contribution of this paper is. 2. The novelty of this paper is limited; to me, it is applying a SOTA technique to a medical image synthesis task. If it is a pure application paper, then I'd like to read an entire story. 3. There are no quantitative results or analysis. 4. How can one make sure whether the results are reliable or not? 5.
Many medical image synthesis tasks are not mentioned. The authors mainly introduce work from the natural image field. 1. The paper is poorly written and hard to follow, with limited novelty. 2. The experiments are not well done at all. There is not even a quantitative analysis (we cannot draw a conclusion based on visualization alone). 3. Though the topic is interesting, the novelty is limited.""",2,1 midl20_32_1,"""This paper has demonstrated a web-delivered deep learning tool for chest x-ray analysis. The authors have implemented/used prior deep learning methods for disease prediction, outlier detection, and prediction detection. They claim that this work can ""bridge the gap between the medical community and deep learning researchers"", which is not shown in the results. This work could bring interesting discussions between radiologists/physicians and deep learning researchers, but less than 1% of the MIDL attendees are radiologists/physicians. This work is a promising application that can potentially help radiologists and physicians utilize deep learning techniques for chest x-ray analysis in their clinical practice. The authors made their source code public. This paper lacks experimental results to suggest that ""this is a solution to bridge the gap between the medical community and deep learning researchers"". It is very hard to imagine how a radiologist or a physician would use this tool to read an image from their PACS system. The audience of this paper should be radiologists, who are not likely to go to MIDL. The authors have used off-the-shelf deep learning methods to develop a web-delivered tool for radiologists and physicians. The authors claim that ""this is a solution to bridge the gap between the medical community and deep learning researchers"" but haven't shown any experimental results (e.g. user studies) to support this. This work will not bring interesting discussions to the MIDL community, most of whom are ""deep learning researchers"". The authors should consider submitting this to a conference like RSNA. """,1,1 midl20_32_2,"""The authors present a software tool which implements the automatic scoring of chest X-rays based on a public dataset. The code will be made available by the authors. Their model can run in a browser using Tensorflow.js (client-side). To achieve this, the authors took their models trained in pytorch, and passed these through ONNX to Tensorflow and Tensorflow.js. Additionally, the authors provide a way of detecting out-of-distribution inputs. This can be useful when the radiologist uploads images of his cat, or when the X-ray cannot be judged properly. - Well written. The paper is clearly about an engineering problem as there are no new methods developed. However, the line of thinking is present and easy to follow. - Code available, using a public dataset - Adds an out-of-distribution detector - Thorough evaluation of the classification model - As there are essentially no new methods developed, it would be worthwhile to see how well this out-of-distribution detector actually works. Are there surely no X-rays which can be properly read by a radiologist, yet get rejected by the model? The evaluation seems thorough, but is not convincing. The different metrics chosen seem to be a bit ad hoc. The paper provides no new insights into the problem of chest X-ray classification.
This was also not the intention of the authors, as they wanted to develop this as a tool to show the possibilities of deep learning and to build a prototype which can be easily used by many. I agree they have succeeded in this. However, the models themselves are out-of-the-box and the evaluation of the out-of-distribution detection is not convincing. It is however a complete prototype, and deserves to be shown.""",3,1 midl20_32_3,"""In this paper, the authors describe a deep learning model, built with a DenseNet-121 architecture, to process chest x-rays and identify possible risks based on them. The authors present a web-based interface that can be used to aid doctors in identifying irregularities in chest x-rays or to stand as a sort of second opinion. Both the paper structure and the prototype are clear. The web interface for the Chester prototype appears to be a very easy-to-use tool. I like that it identifies the image regions that influence the prediction, allowing doctors to better understand what is happening within the system. I also like the visual bars of healthy vs. risk predictions. The authors appeared to go to appropriate lengths to train and test their model. They expanded their dataset by augmenting images through rotations, scaling, and translations. These augmentations did not seem to harm their model's performance. Another strength of this paper is the authors' focus on patient privacy and discussion of the goals of this tool. I also appreciate that the authors specify that this tool is intended to complement the skills of students, doctors, or radiologists. They also discuss challenges and possible solutions for those challenges. The authors seemed to have an answer for why they did pretty much everything. This is a very strong paper. There are no obvious weaknesses. As it is an application-like paper, this warrants a paper more focused on results; this was the case here, and the authors do provide sufficient results. This is a strong and well written paper. The authors justify their approach and it has no obvious weaknesses. The authors appeared to go to appropriate lengths to train and test their model. Both the paper structure and the prototype are clear""",4,1 midl20_32_4,"""This work tries to bridge the gap between image classification with deep learning in the medical field of chest X-ray diagnosis and medical practice. The authors build a web-based, locally running system to help with classification and its explanation in lung disease diagnosis. Their work is significant in helping medical specialists in such a diagnosis scenario, with high system adaptability which reduces labor and increases trust. 1. A web-based system that allows applications in different system environments. 2. Thorough validation of the method. 3. A strict criterion to avoid outlier inputs that affect the prediction of the system. 1. Although outlier extraction/prevention is very useful to avoid incorrect predictions, there is no report of the rate of outliers in a practical scenario. If the rate is moderate or high, which might be true since different protocols are applied in different centers, the application would be very limited. For example, the Pneumonia dataset shows quite a different distribution in Figure 5, but they are also chest x-ray images. 2. For the detected outliers, no further heuristics/methods are provided for chest x-ray images. This work applies a simple deep-learning classification network to chest x-ray images.
It would definitely be of great benefit to medical doctors during diagnosis. However, the application scenario is limited by its stringent outlier detection. The system should also provide a method/heuristics to address problems when a chest x-ray is classified as an outlier.""",3,1 midl20_33_1,"""This work proposed an approach that generates radiology reports from chest x-ray images by splitting pathology-related sentences from the others. The authors should run a model without the pathology (abnormal) sentence generation and show the results to shed some light on how much gain we actually obtain from doing that. The experimental setup should be more rigorous. Does the training/validation/test split guarantee no patient overlap across the subsets? Do the results of the other methods come from their papers or from experiments reproduced by the authors? """,3,0 midl20_33_2,"""Pros: The proposed method uses a CNN for image classification on chest X-rays and a CNN-RNN structure is then applied to generate reports. A different strategy is applied for normal/abnormal cases. For abnormal cases, localized abnormal areas extracted from the first CNN are used by the CNN-RNN to generate reports. The result shows good improvement and many state-of-the-art methods are compared. Cons: During the second CNN-RNN step, for abnormal cases, the localized abnormal areas and the global image are sent to the network separately, so for the normal global image part, the generated sentence may conflict with the first-step prediction. """,3,0 midl20_33_3,"""The authors present a system for generating radiology reports on chest x-rays and additionally providing disease classification and a heatmap to indicate areas of abnormality. Unfortunately I do not see novelty of either method or application in this work. The results of the system are not well analysed or compared fairly with the literature. Only a single example of a system output is shown, with no discussion of cases where various elements of the system fail or of limitations. Specific comments: The methods to determine the classification and heatmap are not novel (DenseNet and GradCam), nor are the methods to generate the reports (Xue et al, cited). It appears that this method is simply a combination of previous works. There is no assessment of how well the GradCam method works to produce heatmaps - it seems to me that producing an accurate heatmap is likely to be one of the most difficult elements of the system, and since the authors claim disease localization as one of their contributions it should be properly analysed. Both table 1 and table 2 show results indicating that this method outperforms others from the literature (firstly in report generation, secondly in disease classification). However in all cases the comparison is unfair since the data used for testing is not the same. Not only is the test set not the same, but in particular the authors have filtered out images/reports that are ""non-relevant"" to the 8 abnormalities they are interested in. The other literature does not make any mention of this step. This filtering would make it substantially easier to achieve better results. In table 2, Wang et al work on a different (larger) dataset with 14 labels (many of which have some overlap/similarity of appearance), so it is not a fair comparison to simply pick out their results on the 8 labels being analysed in this work. """,1,0 midl20_33_4,"""In this short paper, the authors present a new fully integrated pipeline to diagnose common thoracic diseases.
It first annotates the images by classifying and localizing common thoracic diseases with visual support. It then generates a report using a recurrent neural network model dedicated to natural language processing (specifically an attentive LSTM model). A relatively high-level description of the method is given, as this is a short paper. It nevertheless allows the reader to understand how the method works. The method is also shown to perform well compared with Li et al. (2018) and Xue et al. (2018). The paper reads well and the methodology is relatively clear. The results seem promising. The authors however did not make clear what motivated their technical choices compared with Li et al. (2018) and Xue et al. (2018) and why they believe their method outperforms these methods. """,3,0 midl20_34_1,"""This paper is sort of a ""negative results"" paper. It is about testing a different (automatic and direct) approach to baby head circumference measurement in ultrasound, a very common procedure done in every pregnancy. Innovation in this procedure would have a significant impact on clinical practice. However, the authors found that their approach was not accurate enough. Probably others have also tried this approach, but not published it due to large errors in the measurement. Using a public dataset and describing methods clearly, the authors have made an exemplary effort to publish reproducible research. The language reads well, the figures are informative and support the text nicely. The background literature is well-presented. By far the main weakness of the paper is that the results are far from applicable in routine clinical practice. We don't know if this is because the approach chosen by the authors would never work in practice, or because it just needs more fine-tuning or better training data/parameters. Figure 2 could be made more useful by including more parameters. State the size of the conv layers in each layer (not just the input) and/or state explicitly the resize factor of the pooling layers. Linear activation is very uncommon and needs to be better specified. The authors say that their future work will include VGG and ResNet testing. However, these networks are readily available in all deep learning environments. It should only take a few hours to test them on the given dataset. Why did you not test them already? Why wait for future research to do a task that should take at most one day? The topic and the clear presentation warrant acceptance. However, my concerns listed under the ""Weaknesses"" section are numerous, so I don't feel particularly strongly about this paper. Maybe the authors could make improvements to make a stronger case for acceptance.""",3,1 midl20_34_2,"""The proposed problem, measuring head circumference from fetal ultrasound images, is very interesting and important. It would be great to find an efficient and accurate way of solving it. The goal is to establish head circumference, a significant key feature in tracking development, without manual delineation or (automated) full brain segmentation. Providing robust quantitative estimates for head circumference from ultrasound images addresses an important question. The HC18 dataset is used, providing a large training data set. Quantitative results from the literature are compared when evaluating performance. The submission is rather poorly written, which makes reading it hard.
The assembly of the proposed network/pipeline seems somewhat random and there is not much insight provided to the reader in the way of explanation. The authors use regression CNNs, comparing two different architectures (CNN_1M and CNN_263K) and several loss functions. What is the justification for these two systems? How did the authors come up with them? The current performance comparison is very approximate. Do the baseline segmentation-based approaches (used as performance comparison) not have any open-source implementation to try on the current data sets? Is there an std value accompanying the mean value from the literature? What are the low labeling costs mentioned for the new pipeline? Do those refer to the HC measures? Please describe in more detail. The methodology is poorly explained and it is not clear what the main conclusion of the paper is. The results are below the segmentation techniques from the literature and have a higher variance than manual segmentation results. """,2,1 midl20_34_3,"""The authors compare two regression CNNs for estimation of fetal head circumference (HC) in ultrasound images. This is a useful and important application as head circumference is a key measurement for monitoring of fetal growth and estimation of gestational age. The paper describes two architectures to directly estimate the HC without using manually labeled data. The direct estimation of HC without using manually labeled data is interesting. Automatic measurements in ultrasound images are important as these are often challenging to interpret and inter/intra-observer variability is high. For measurements related to growth, like in this application, it is particularly important to have high-precision measurements. Some points could be clarified: - The motivation for the two architectures presented. - Does the data contain 999 unique foetuses or 999 unique acquisitions? - What is the accuracy required by the clinical users for this application? - Is the error independent of HC? In other words, does the method perform equally well on early-stage and late-stage foetuses? - The authors claim ""small labelling cost"". This could be commented on in the discussion. Interesting method, although the motivation for the actual architectures is not well explained. The paper describes an important and well-defined application where automated measurements could have a great impact.""",3,1 midl20_34_4,"""The authors use a CNN to directly predict the fetal head circumference from 2D ultrasound images instead of first segmenting the fetal head. They use the publicly available HC18 dataset and train two different CNN architectures using three different loss functions. Tables 1 and 2 do not show very good results, since the MAE is more than one cm, while other papers show results around 2mm. It was surprising for me to read 'By taking into account pixel size, ..., the mean absolute difference between predicted HC and ground truth HC in mm is 2.232mm (+/- 1.49)' The authors are the first to directly determine the HC from an ultrasound image. The paper is well written and clearly describes two experiments. They experimented with three losses and show nice training graphs that explain their conclusions. The results in Tables 1 and 2 show a mean absolute error above 1cm, but the text reports a mean absolute difference of 2.216mm. I do not understand this difference.
The HC18 dataset also has an independent test set, which makes it possible to compare the results of different algorithms on the same dataset. It would be good to determine the mean absolute difference in HC for this algorithm on the test set and report the results, so other authors can directly compare their results. The authors mention that they perform flipping, translation and rotation, but do not mention the range (e.g. horizontal and vertical flipping? translation between ... and ... pixels/mm? rotation of ... degrees?). I also question whether rotation is a valid augmentation for ultrasound images, since the shadowing always occurs in the direction away from the transducer. The authors show a novel approach to this problem and have written an easy-to-understand paper about it. It is a weak accept, because the authors need to clarify whether the results show a mean absolute difference of 2mm or 14mm.""",3,1 midl20_35_1,"""This work tries to address robustness in image segmentation quality assessment by training two networks: a reconstruction network and a regression network. The result seems fine but there are some problems to address. 1. The comparison is not clear. The authors do not specify which of the methods in the reference they compared against, where at least 2 methods were applied. 2. The pseudo-formula representing the adversarial attack level is numerically different from the referenced criterion. The authors need to address why there is a numerical difference and explain why they specifically chose these levels of adversarial attack. 3. The claim ""Our method also shares the merits of unsupervised lesion or outlier detection"" is not convincing. These referenced methods' merits come not just from training on normal data, but rather from the fact that training on normal data allows them to model the distribution implicitly or explicitly using generative models, whereas in this paper no such modeling of normal data is shown. 4. This work overstates that it developed two CNNs, when it just takes advantage of a U-Net and an AlexNet. The authors need to tone down this claim or perform major modifications to the architecture or objectives. """,2,0 midl20_35_2,"""This work outlines a method to estimate segmentation quality without learning from ground truth. Quality assurance is an important topic and should feature at MIDL. However, this abstract is very casually written ('people proposed', 'the network suffers', etc.) and some parts of the motivation are too broad, e.g. how are robustness and adversarial attacks connected? The first thing that would come to my mind regarding robustness is rather domain shift, as stated in the Introduction, and probably the last are adversarial attacks. However, the abstract improves later on and reasonable justifications are given. The authors motivate their contribution by de-emphasising irrelevant/adversarial features, which seems reasonable. The presented results are not convincing. The differences to Robinson et al. seem to be minimal and well within the confidence margins. It is questionable if adding a task-dissimilar self-supervision loss should be able to improve an approach like Robinson et al. significantly at all. I think this needs thorough discussion and the abstract should avoid overstating that their method improves the robustness of automated segmentation quality assessment. """,2,0 midl20_35_3,"""The authors propose a method for segmentation quality assessment.
The method consists of learning a reconstruction network for the masked input images and a regression network for quality assessment. The reconstruction network aims to faithfully reconstruct only input images masked correctly by the segmentation, while the regression network learns to assess the quality by looking at segmentations of different quality. The robustness of the proposed method is supported by quantitative evaluation with comparison to a baseline method. The underlying idea interestingly links to earlier work in unsupervised detection and couples it with quality assessment. The paper is well structured with a clear contribution and provides the key results. The results show that the proposed method is more robust against adversarial attacks. Additionally, a few concerns are: 1) REG-Net is trained to assess the segmentation quality by providing it with images of different segmentations. What is the metric to assess such segmentations, or in other words, what is the ground truth for pseudo-formula, and how is it obtained? 2) U-net is used as the framework for REC-Net. U-nets have skip connections to preserve details for segmentation; however, using skip connections in the very first layers for reconstruction may leak much information and make the task very easy. Is this the case for REC-Net in this work? 3) It may be useful if the authors can give the loss function for the proposed method in the paper.""",3,0 midl20_35_4,"""The quality and clarity are high in this work. Pros: The method is designed for unsupervised lesion or outlier detection. Only normal data is utilized in the training of the reconstruction network. The method achieves better mean absolute errors of dice prediction under different levels of adversarial attack. Cons: The performance is evaluated only on adversarial attacks. The performance on lesions and outliers is not evaluated.""",4,0 midl20_36_1,"""When there is no ground truth, it is difficult to directly train a CNN to do deblurring. This paper uses the deblurring result from another method and simulates the blurred input using a physical model. In this way, the paired images follow the physical model of blurring. Pros: The method respects the physics of MRI, and the result is very promising. Cons: (Minor) The first experiment did not compare with Lim et al., 2019a.""",4,0 midl20_36_2,"""The authors show that a CNN deblurring approach gives promising results for real-time spiral readout MRI. The work is reasonably convincing, as a first step, and achieves impressive-seeming results. Clarity is good, with the authors explaining both their problem and the proposed solution. Originality lies mainly in the application: there does not seem to be a lot of pre-existing work in the literature. Significance seems sufficient for a short-format paper; there may be practical issues with the authors' approach compared to reference approaches, but this is a promising first step. On balance, I recommend this paper for acceptance. Pros: * Original work on an interesting application. * Convincing evaluation of performance. * Well-written and clear. Cons: * Unclear to what extent Figure 3 is truly representative; it might be beneficial to include several example images. * Limited discussion of the quantitative results in Figure 2. * No quantitative results for experimental data (due to lack of ground truth).
Understandable in a short paper format, however.""",4,0 midl20_36_3,"""Spiral sampling of MRI data is very time efficient but may require special reconstruction algorithms to reduce artifacts. This paper introduces a CNN-based method to deblur spiral-sampled MRI data. A novel method was introduced to synthesize distorted data with augmented field maps. However, IR with the reference field map seems to perform better than the proposed method.""",3,0 midl20_36_4,"""(+) the paper is nicely written and easy to follow (+) the idea of synthesizing data with blurring related to long spiral acquisition for subsequent training of a residual network for artifact reduction is reasonable (+) the graphical abstract in Figure 1 is neat. (Note that the . is missing in the caption) (+) evaluation is performed on both synthetic and real test data (-) evaluation on the synthetic test data is limited to MFI and IR based on the reference field map only. A comparison of your method to MFI and IR with estimated field maps would be interesting. (-) the runtime evaluation (12.3 ms per frame) provided in the conclusion should be part of the experiments section. Please note the runtime of the comparative methods as well. I recommend acceptance of this short paper.""",4,0 midl20_37_1,"""This paper presents an interesting experiment that may be helpful for others too in finding better neural network parameters. I'm not sure how many people at MIDL are familiar with ENAS. I'm not. Don't use an acronym in the title. And explain better in the text what ENAS is. In 2.1, you say you augmented the ultrasound image with 90, 180, etc. degree rotations. Ultrasound is a directional imaging modality. Rotating beyond 15-20 degrees doesn't make any sense, and probably doesn't help in algorithm training either. You also say that these augmentations create 7 extra images. Why not implement a data augmenter that manages training data and applies random rotations when the images are requested? AlexNet is a bit old, and designed for two GPUs. I'm not sure why you didn't pick a more modern network.""",3,0 midl20_37_2,"""This paper addresses one of the problems in deep learning, but the contribution is limited. The author uses an existing approach (ENAS) on a breast cancer dataset. So, it is not clear what the author's contribution is in terms of methodology. The size of the dataset is appropriate and the paper is well written, but the contribution is not very clear to me. """,2,0 midl20_37_3,"""pros: - relatively large dataset of 524 US images (however, unclear from how many patients; potential bias) - histologically verified classification - automatic network optimization - reasonable optimization time cons: - sloppy handling of references (examples see below) - how do you ensure that the limited search space of ENAS does contain the optimal solution? - how sensitive is the approach to scan parameters like frequency of ultrasound, directionality, time gain compensation (depth compensation), reconstruction modes of the US scanner, ...? - data augmentation does not consider characteristics of ultrasound image formation - boundaries of lesion only drawn by a single person - only two classes are used; at least normal tissue would have been useful - no sample images provided comments: I don't see how a paper named ""Estimates of incidence and mortality of cervical cancer in 2018: a worldwide analysis"" is a good reference for breast cancer.
And indeed, scanning through the paper does not reveal much more than a few comparisons which back your first sentence of the introduction. Please provide a more suitable reference. Your sentence ""However, no CNNs have been designed and optimized automatically (...)"" is not fully correct. E.g. in Y. Weng, T. Zhou, Y. Li and X. Qiu, ""NAS-Unet: Neural Architecture Search for Medical Image Segmentation,"" in IEEE Access, vol. 7, pp. 44247-44257, 2019, a UNET is used in combination with NAS for ultrasound nerve images. Since segmentation is a form of classification for individual voxels, your statement has to be corrected. Typo in caption of Figure 2: Redaction Cell -> Reduction Cell""",2,0 midl20_37_4,"""The authors use ENAS to classify benign and malignant breast lesions. It is an interesting approach that reaches an accuracy of 89.3%. The authors compare the ENAS results to AlexNet and CNN3 and show that their ENAS models give the highest performance. From the results it seems that the AlexNet just does not converge at all. The CNN3 network shows an accuracy of 78.1%, but it is unclear how these networks were trained. Two small points: (1) The authors mention that the input size is rescaled, but the original image size (in pixels and mm) is not mentioned and it is unclear whether the acquired images were made with a preset, or whether the sonographer was able to adjust zoom, gain etc. (2) The authors augment the images using rotations of 90, 180 and 270 degrees, but this seems invalid since ultrasound images contain shadows which are always directed away from the transducer (so downwards).""",3,0 midl20_38_1,"""This paper proposes a semi-supervised learning method for surgical video anonymization. The proposed method is first trained on a roughly labeled dataset containing an unlabeled part as noise; then the training set keeps changing across iterations of the training process. The method is interesting and well organized, especially regarding the visualized results in the experiments, while more details could be given to make the work more solid. Surgical video anonymization is a very interesting and significant research topic and is worth being discussed, as privacy becomes increasingly important in research and products. This paper introduces a deep learning model to solve this problem, and proposes a semi-supervised learning approach to avoid frame-level labeling. The paper is well organized and presented. 1) The title is a bit confusing. Anonymization does not exactly describe what the paper and the proposed method focus on. What the paper does seems more like denoising surgical videos in the temporal dimension, removing irrelevant frames from surgical videos. 2) The usage of semi-supervised is also confusing. The training data is only labeled for the parts before the start and after the end of the video, while considering the irrelevant frames inside the surgery as relevant. It is closer to handling noisy datasets, and somewhat different from general semi-supervised learning. 3) The validation set should be fully labeled. Only in this way could the iterative changes to the training dataset be evaluated with proper metrics. This paper has good overall quality and a weak accept is appropriate. I would suggest a higher rating if the presentation and wording were more accurate and the experiments were better designed and executed.""",3,1 midl20_38_2,"""The paper presents a deep learning-based framework for the classification of surgery-related and non-relevant segments in surgical videos.
For this purpose, the ResNet-18 model has been employed and a weakly-supervised approach combined with iterative semi-supervised learning has been proposed to annotate the training dataset. The performance of the proposed method has been evaluated on laparoscopic cholecystectomy videos. This is an interesting work which fits well within the scope of the conference. The paper is well written and easy to follow. The method seems theoretically sound and the references adequate. The performance evaluation has been based on the analysis of surgical video sequences. The technical novelty of the proposed method is limited. A state-of-the-art network has been used for data classification and the main contribution is the different approaches to generate the training dataset. Also, the clinical motivation is not very strong. As stated in the ""Weaknesses"" section, the presented work is of limited technical novelty and the clinical motivation is not very strong. I believe this paper is not ready to be accepted for publication at this conference.""",2,1 midl20_38_3,"""The presented paper aims to label and remove irrelevant sequences from laparoscopic videos. This is done with manual labelling and a ResNet-18. Motivation is based on anonymisation and data cleansing. Iterative refinement is claimed to be semi-supervised learning. Several experiments are proposed and results are presented. - automatic patient data anonymisation and data cleansing are important topics - the results look good, with a big but (see below) - this is clearly an application paper, testing well-known methods in a new scenario. - No effort has been made to fuse the proposed pipeline into a medical-image-analysis-specific methodological contribution. Why is, for example, the output temporally smoothed instead of using spatio-temporal consistency in higher-dimensional networks? Why hasn't the semi-supervised paradigm been explored in more detail instead of only using a few biasing iterations with user input? - A radical ablation study is clearly missing here. The task itself would imply that a deep network classifier is potentially overkill. Bluntly: surgical parts are predominantly red, non-surgical parts anything and blue/green. How would a generic linear classifier on the image histograms perform here, or perceptual hashing with a linear classifier on top? Do we really need a labelled ground truth here? Can't simple heuristics perform at least as well? Assessing focus would even get rid of blurred frames and the frames discussed in the Appendix. There will be domain shift problems for the simple methods, but the same is true for the presented method. - Writing, experimental setup and methodological proposals need to be improved and condensed. I have been working in this field for many years and published papers about these topics. I am advising regulatory decision makers and do active research in clinical environments. I am advocating open data access and reproducible research.""",2,1 midl20_39_1,"""The authors present a fast multi-parameter MRI acquisition based on magnetic resonance fingerprinting (MRF). Instead of estimating parameters using dictionary matching, the authors perform parameter regression using a CNN for rich spatio-temporal regularisation. This is the first work (to my knowledge) to successfully measure the full diffusion tensor information with MRF, which the authors claim is made possible by the improved regularisation from their CNN. The paper's experiments examine data from 20 subjects, 9 healthy and 11 with multiple sclerosis.
The authors compare their method with several more standard reference acquisitions (DTI, DESPOT1/2), mostly finding good agreement with their (accelerated) method. This work marks an intriguing step forward in combining advanced MRF-type acquisitions with deep learning based parameter regression, and I believe it would be a valuable contribution to MIDL. Development of an MRF-style acquisition for full DTI information (not just ADC) simultaneously with other parameters. This is among the first such works to successfully estimate the full DTI information. Development of a U-Net approach for regularised parameter regression, which builds upon previous successful applications of CNNs in diffusion imaging parameter estimation. The authors claim, convincingly, that this is necessary for successful parameter estimation from their accelerated acquisition. Clear comparison against suitable reference acquisition methods, across a convincing number of experimental subjects (N=20) with different pathologies (9 healthy subjects, 11 with multiple sclerosis). This gives a higher degree of confidence in the authors' results and their future potential. The examination of MS micro-structural changes also allows for a slightly more in-depth investigation of the method. Relatively brief analysis of results. In the authors' defence, they do include several very informative example figures, and their Table 1 is quite rich in information. Nonetheless, it feels as if the paper spends a lot of time on Methods, and not much on Results. Ideally I would like to see more informative comparisons of the different failure modes for different statistics. Although the authors describe their CNN architecture and pre-processing in reasonable detail, many details would benefit from further justification. The clearest example of this is the ""Data pre-processing"" paragraph. The authors make several claims about the benefits of their chosen normalisation, but no data are presented to back these up. Ideally there would be an ablation study for some of these details, giving more weight to the authors' claims. In practice, even a discussion of the authors' preliminary experiments while developing the method would be helpful. In a similar vein, the authors emphasise that dictionary matching becomes intractable as the number of parameters increases. However, they do not provide any estimate of how long this would take for their problem; it would be good to explicitly show this is impractical in their setting, rather than simply claiming it. The authors present a meaningful advance in diffusion imaging, at the interface of acquisition and post-processing. They have significant novelty within each of these areas. Their experimental validation is fairly high quality, and their results are intriguing and promising for future work.""",4,1 midl20_39_2,"""This paper introduced a diffusion MRI fingerprinting sequence with varying flip angles and diffusion encoding sequences. Then a deep convolutional network was introduced to estimate the parameter maps using imaging data from this novel sequence. Different from previous methods, this paper focused on the estimation of diffusion tensors instead of the mean diffusivity. 1) This paper introduces a novel diffusion MRI fingerprinting sequence with varying flip angles and diffusion encoding sequences. 2) The algorithm can simultaneously estimate the relaxation parameters and the diffusion tensor. 3) The estimated parameters were validated using experiments.
1) The diffusion direction shown in Figure 3 seems to be biased. 2) The DTI model is not suitable for crossing fibers, which should be mentioned. 3) The method was trained using healthy subjects. Thus it may not be suitable for analyzing patient data. This is very complete and solid work on relaxation and diffusion fingerprinting imaging. Separate acquisition of diffusion MRI and relaxometry images takes a long scan time. The method introduced in this paper could significantly reduce the scan time. The paper integrates a novel sequence with a very suitable algorithm. Although it has limitations in practical application, it is still a very novel and interesting method paper.""",4,1 midl20_39_3,"""This paper describes a novel MRF pulse sequence that allows DTI to be acquired alongside T1 and T2. While standard MRF uses a dictionary matching technique, this paper opts for a U-net for reconstruction of T1, T2, and diffusion tensor images. The paper shows reasonable concordance between the U-net reconstructed images and the images acquired using standard techniques. The paper is clearly and succinctly written, describing the experiment in adequate detail. The paper analyzes the performance of the algorithm in reconstructing different maps in different types of tissue. The paper claims to be a proof of concept and demonstrates the possibility of such an acquisition scheme, but does not provide any comparisons to other methods. For example, the introduction mentions other diffusion-weighted MRF techniques but the paper makes no attempt to compare the technique at hand to them. The paper would benefit from some kind of baseline comparison or discussion of related techniques. It is very possible that the U-net could learn a model of a typical image map. The paper attributes the regularizing effect of the U-net to the imposition of spatial correlations, but it is possible that this could instead be due to the network's ability to memorize the training data. Some discussion as to why this is not the case would be good. The paper uses an MS dataset. RMSEs within the MS lesions are reported but more discussion would be nice. It seems safe to assume that the MS lesions add heterogeneity to the data. Demonstrating that the network accurately reconstructs these regions would allay concerns about the network's ability to memorize the training data. The method seems novel and interesting. However, due to the lack of baseline comparisons or an in-depth discussion of related techniques, it is unclear to what degree the method represents an improvement.""",2,1 midl20_39_4,"""The authors present a deep learning based reconstruction of quantitative T1 and T2 maps as well as directional diffusion information from diffusion-weighted MRF data. They chose a DL based approach since conventional dictionary based methods do not scale well to the larger parameter spaces required for diffusion-weighted MRF. For the reconstruction the authors used a classic U-Net. The approach was evaluated on healthy subjects as well as on MS patient data by comparing the obtained parameter maps to data obtained using the classical approaches for T1/2 MRF and diffusion-weighted MRI. The overall methodology seems sound and the results might be valuable for the MRF community. The approach is much faster than the reference methods. The authors performed validation on healthy as well as patient cases, which is crucial for such a learning-based image generation approach.
The deep learning part of the presented work is a straightforward application of an existing method (U-Net) and lacks originality and novelty. This would be ok for an MR-focused conference such as ISMRM, but not for a DL-focused conference such as MIDL. The claim ""we achieved a comparable reconstruction performance"" seems arbitrary. There seem to be significant differences between the results of the proposed method and the chosen reference. It remains unclear how these differences might influence subsequent analyses. Unfortunately there is no phantom-based analysis enabling an absolute quantification of whether the method yields better or worse results than the reference methods. The paper is methodologically sound but it lacks novelty or originality in the deep learning aspect. Simply applying a U-Net to a problem that is not of very broad interest is not enough to be accepted at a deep learning conference. This work would be suitable for an ISMRM submission since the MR part seems much more interesting.""",1,1 midl20_40_1,"""The authors train a standard architecture on several datasets of fundus and SLO retinal images and report performance for each of these experiments as well as cross-modality testing. Special emphasis is given to the impact of the patch size used for training the models. Experiments show the somewhat interesting fact that training on SLO is not a very good idea if the model is to be used for standard fundus images. - Few papers (maybe none) have reported experimental analysis on training in one modality and testing in another without re-training. It is interesting to know that training on fundus and testing on SLO may be ok, but not the other way round. - The analysis of patch size is also a good point of the paper. - A good number of datasets is considered in the experimental section, more than usual. - When I first read the title I thought the authors were going to propose a method that would learn to segment retinal blood vessels simultaneously from different modalities, given that it has the words ""cross-modal learning"" in it. However, that is not the topic of the paper; rather, what they have is ""cross-modal evaluation"", which is a very different thing. - Numerical results are relegated to the appendix, and the paper's experimental section is left quite weak in my opinion. Not a single numerical result is present in the main paper, only two sets of graphs that show the evolution of accuracy, sensitivity and specificity as a function of patch size. In my opinion, it is not fair to tell the reader that s/he should go to the appendix to see the interesting part of the paper, which is how well this method works on the different datasets. - I don't think it is correct to rely only on accuracy, sensitivity and specificity at a given threshold used to binarize the predictions of a CNN. Area under the ROC curve should be reported, and also F1-score or some other metric that does not depend so much on class imbalance (accuracy is rather useless in this problem; it will be super-high for almost every method you consider). - I believe at the very least some baseline performances of other methods should have been reported together with what is in the paper now, just to know how this compares with the current state of the art (I understand it will not be the best method one can find, but that is no reason not to report some comparison).
The experimental evaluation is not very well designed and reported in this paper, which makes it hard to understand the conclusions. Also, without looking in the appendix it is impossible to know the actual outcome of reading this paper. In addition, I don't think there is much relevance/novelty to what is proposed in this paper. Maybe the authors could consider doing actual cross-modality learning by learning in some way jointly from both modalities. If a method trained like that were to outperform methods trained only on SLO or only on fundus images, that would be more interesting! """,1,1 midl20_40_2,"""This paper aims to analyze the effect of the number of patches and their respective sizes on retinal vessel segmentation performance. To this end, a comprehensive analysis has been performed using a U-net based framework to show its generalization ability. It also studies whether knowledge obtained from Fundus Photography (FP) images is transferable to another imaging modality, namely Scanning Laser Ophthalmoscopy (SLO) images. In total, six public datasets have been used for the validation and analysis. 1. The study of whether results are transferable from color fundus photography to SLO images is interesting. 2. The investigation of varying patch sizes and the fusion of images from different datasets shows the generalization ability of the framework. 1. The motivation and results are not explained clearly in the Abstract and Introduction sections. 2. Although extensive analysis and evaluations have been performed to show the practical value of this work, the methodological contribution is modest. This paper presents a comprehensive analysis of a deep learning based retinal vessel segmentation framework on multiple image modalities. This paper is in general well-presented and easy to follow. The comparative analysis shows some interesting aspects. However, the investigation could still be improved by including more specific studies of important issues like the segmentation of challenging vessel structures.""",3,1 midl20_40_3,"""The work looks into the use of deep learning for image segmentation of vessels from two retinal imaging modalities: fundus photography (FP) and scanning laser ophthalmoscopy (SLO). A U-net was trained, validated, and tested using six publicly available datasets, without data augmentation or pre-processing. Four of the datasets consisted of FP data, while the remaining two consisted of SLO data. Different combinations of training and testing were carried out, e.g. Train FP & Test FP; Train FP & Test SLO, etc. Experiments were carried out using different settings in order to identify the effect of the number of patches used for training, as well as the chosen patch size, on segmentation performance. Well written, interesting problem to explore, literature review covered a good number of papers. I found the use of different data sources for training and testing particularly interesting, including the finding that training only on fundus photographs could result in good segmentation on SLO data. 1. It is not very clear what the main goal of the paper is. Is it to simply show that deep learning can work well for vessel segmentation? Is it to show that this can be done without data augmentation or pre-processing, as mentioned several times? If so, the paper didn't show any experiments with data augmentation or pre-processing for comparison. Also, what advantages could not carrying out data augmentation have? Perhaps a reduction in training time?
Currently the paper shows that U-net works very well under different settings and on different data, but it is hard to grasp what the goal of the paper is. 2. The paper discussed how important DSC is, but did not actually report DSC as an evaluation metric. 3. It is difficult to understand which set of experiments the graphs in Figures 2 and 3 refer to; please clarify this on the figures. 4. Key results are actually shown as tables in the appendices, which some readers might not refer to. The work is interesting, the paper is well written, and a lot of experiments were carried out. However, it is not clear what the main goal of the experiments is, and how the goal relates to the findings. """,3,1 midl20_41_1,"""pros: This paper tries to discuss whether GAN-based data augmentation could improve bone segmentation from ultrasound images. cons: 1) The paper fails to give details about why and how the visual inspection is introduced. According to the authors, the inspectors only make sure the ultrasound images look real, then store them as snapshots. How do these snapshots improve the segmentation performance? Have the generated images been checked against the ground truth label images? 2) A GAN is dynamically updated and balanced during training. I do not see a strong motivation to add human interaction to augment the dataset.""",2,0 midl20_41_2,"""The authors evaluate the effect of data augmentation -- generated using a 15-fold pix2pix network -- on improving an ultrasound bone segmentation model. They show that while data augmentation helps per se, the multifold aspect of it is not useful. There are some major concerns with the paper which I will list as follows: 1) the pix2pix network is not a generative model but rather an image translation model. Therefore it is a one-to-one mapping network. There are variations of that model that augment pix2pix with a stochastic variable which one can sample from, e.g. Toward Multimodal Image-to-Image Translation. I recommend the authors include this model in the evaluation since it is a simple modification of the pix2pix network. 2) While the authors do evaluate the effect that data generation has on the downstream task (segmentation), they do not evaluate the image translation models. This could be done qualitatively by showing, conditioned on a segmentation mask, how the images from different image translation networks look and what degree of variation we can hope to achieve using the 15-fold framework. Based on these comments I don't see the paper ready to be presented as is. I encourage the authors to improve the submission and try again. """,2,0 midl20_41_3,"""Segmentation of bone surfaces from intra-procedure ultrasound data is an important step in ultrasound-guided surgical and non-surgical procedures. However, this is a challenging task, as low signal-to-noise ratio, imaging artifacts, and user-dependent imaging result in large segmentation errors for the traditional image analysis methods developed to provide a solution to it. Most recently, deep learning-based approaches have been investigated by various researchers. However, as the authors mention, the scarcity of bone ultrasound data for training hinders the widespread adoption of these methods. The manuscript investigates the effect of GANs for simulating ultrasound data from manual segmentations to increase the training data size. Although the work tries to address an important problem, important details are missing.
Major and minor points are provided below. Major: The authors do not provide any example images of what kind of bone ultrasound data was used for training or generation. High-quality ultrasound data corresponds to high-intensity bone interfaces followed by a low-intensity bone shadow artifact. If these two important features are present in the data collected for testing, I don't think a well-trained (using the ~3,000 scans, for example) Unet would have low error values. Usually Unet fails if the collected ultrasound data is of low quality (low-intensity, blurred bone boundaries, artifacts in the bone shadow region). This is usually the result of a wrong orientation of the ultrasound transducer with respect to the imaged bone anatomy, or occurs when imaging complex-shaped bone surfaces such as the spine. The work would have a stronger impact if GANs were used to generate low-quality ultrasound data. In summary, more details about what kind of data was used are missing. I had a hard time understanding what the terminology GAN 1X, GAN 2X, GAN 15X stands for. Does GAN 2X mean the pix2pix GAN architecture generates twice the size of the training data? This should be clearly explained. Only a single evaluation metric was investigated. Sensitivity, specificity, F-score, and average Euclidean distance (usually used to report the localization accuracy of bone segmentation) were not investigated. Reported Dice values are very low. The authors do not discuss this. Was there a specific reason for this? The data size used for training is not too small! As a follow-up comment: Most recent state-of-the-art methods for bone segmentation, based on deep learning, have achieved significantly improved results over Unet (see below for references). These methods combine multi-feature images as an input to Unet together with the traditional B-mode ultrasound data. The authors should include these and discuss whether the reported Dice values could be improved using a multi-feature CNN architecture. Wang P, Patel VM, Hacihaliloglu I. Simultaneous segmentation and classification of bone surfaces from ultrasound using a multi-feature guided CNN. In International Conference on Medical Image Computing and Computer-Assisted Intervention 2018 Sep 16 (pp. 134-142). Springer, Cham. Alsinan AZ, Patel VM, Hacihaliloglu I. Automatic segmentation of bone surfaces from ultrasound using a filter-layer-guided CNN. International Journal of Computer Assisted Radiology and Surgery. 2019 May 1;14(5):775-83. El-Hariri H, Mulpuri K, Hodgson A, Garbi R. Comparative Evaluation of Hand-Engineered and Deep-Learned Features for Neonatal Hip Bone Segmentation in Ultrasound. In International Conference on Medical Image Computing and Computer-Assisted Intervention 2019 Oct 13 (pp. 12-20). Springer, Cham. """,2,0 midl20_42_1,"""In this paper, the authors try to segment cells in 3D microscopy images. They use a publicly available dataset to validate their results and compare them to a recent publication on the same task with the same dataset. One of the issues with segmenting cells in these kinds of images is that clustered cells are often recognized as one object. To overcome this issue, the authors propose an auxiliary task to point at the center of mass of every cell. - The authors propose a method to ensure that clusters of cells are well-segmented as single objects. - Comparison to other publications on the same task/dataset. - Good baselines with multiple approaches and recent methods.
- Clearly described paper, and an approach that could be used. Some areas for improvement are included below: * Introduction - ""in terms of average AP."" Abbreviation used without explanation. * Method - Could the authors give more information about the architecture of the proposed model? This is unclear from the text. * Experiments - The method was tested on just 7 images. In my opinion, this isn't a lot, and the authors should consider using cross-validation. - How much more computational power is needed for this auxiliary task? Because the results only show a minor improvement over the baseline, the method shouldn't be much more computationally expensive. * Conclusion - The results of the proposed method are slightly better compared to the baseline models. Could the authors explain how they could improve the results? The paper proposes a method to better segment clustered objects for cell detection in 3D microscopy images. The method and experimental setup sections leave room for improvement, because some details are not clear. The results show a slight improvement over a recently published paper, but the authors don't explain well how the results could be improved. """,3,1 midl20_42_2,"""This paper proposes an auxiliary task for segmentation of object instances that appear in dense clusters. Target objects are densely packed nuclei captured by a microscope. The proposed auxiliary task is to regress vectors that point from each foreground pixel to the center of mass of the respective nucleus. This auxiliary task can be added to the architectures of segmentation CNNs. The experimental results of three segmentation methods with and without the proposed task show a 1-3% improvement in segmentation accuracy. The proposed method regresses vectors pointing to the center of the object and uses the regression error as a loss for learning. This idea looks similar to the auxiliary task with distance transform maps, where the distance from the center to the boundary, given by the ground truth and the prediction result, is used for the computation of the loss. In the experiments, the authors did not compare against other auxiliary tasks and loss functions. A theoretical or experimental explanation of the differences from other relevant methods is required for fair validation. The idea of the proposed method is interesting and the experimental results show some improvement in segmentation accuracy. However, the theoretical or technical relation to other methods is not presented. Comparisons between the proposed method and other relevant auxiliary tasks are required to show the validity of the proposed method.""",3,1 midl20_42_3,"""* The authors present a DL method to segment nuclei in 3D microscopy images * In addition to learning whether each pixel position belongs to a nucleus, they let the network learn a 3D vector pointing to the center position of the respective nucleus * They also compare their method with a pure detection-based approach * The paper is well written and easy to follow * The approach is mainly well described and motivated * The authors show an improvement (even if very small in some cases) for all methods they extended with their approach. Regression of a vector pointing to the center of the nucleus has been done by Xie et al. (2015)* * Their work should be cited and differentiated from the authors' work * Adding the auxiliary loss to the main loss requires weighting.
In this case, no weighting was described, so we can assume that both losses are just added up without a scaling factor. This *may* lead to a good balancing of the losses, but does not have to. An explanation of that (missing) weighting should be added. * It is not clear to me whether the vector output of the +cpv methods is used to create the final segmentation. This should be described more clearly. *Xie, Yuanpu, Xiangfei Kong, Fuyong Xing, Fujun Liu, Hai Su, and Lin Yang. Deep Voting: A Robust Approach Toward Nucleus Localization in Microscopy Images. In Medical Image Computing and Computer-Assisted Intervention MICCAI 2015, edited by Nassir Navab, Joachim Hornegger, William M. Wells, and Alejandro F. Frangi, 9351:374-82. Cham: Springer International Publishing, 2015. pseudo-url. * The authors present a well-written paper about their method * The method has very good aspects such as being easily integrable into other methods * Results of the method are promising * A major drawback of the work is that the authors do not relate their own work to Xie et al., who already described vector-based nucleus detection 5 years ago. * Although this work differs from Xie's method, it should have been mentioned and discussed in the related work section""",3,1 midl20_43_1,"""The paper presents an approach to tackle genetic and radiology image fusion via painting of nodules into non-neoplastic lung images. An end-to-end deep learning GAN is presented with a bicephalic structure, processing simultaneously a gene map and a background image, and outputting a generated image and an associated predicted tumor segmentation mask. The system is trained using 3 discriminators, intended to discern generated images, segmentation map mismatches and gene map mismatches. Experiments are conducted on a public dataset involving a total of 130 subjects. Most results are qualitatively reported. The paper is well written: ideas are clear, language is formal, and most of the expected related works and introductory materials are present. There is substantial effort in benchmarking with appropriate approaches, although the task is relatively novel. Experiments on public data should be reproducible with the details provided by the authors. There is not enough information about the metrics extracted. How are MSE, SSIM and PSNR computed? Between the generated image and the background image? Although this is arguable, results in Figure 3 indicate that there is less variability in the shapes of the proposed generated images (last row) than in Park et al., and less than in the images associated with the input gene codes. Apart from this aspect, there is no doubt that the proposed method yields the best generation quality. However, this could contradict one of the 2 claims of the authors, which is ""that a discriminative radiogenomic map can be learnt via this synthesis strategy"". The gene coding results section does not tackle this aspect since it is only related to genomic information (taken from vectors outputted by the transformer). t-SNE is also only a projection of the data onto a lower-dimensional space and is therefore only arguably a good representation of the true data distribution. Besides, what is the meaning of colors in the t-SNE results? Are those training samples for all 3 methods? End of 2.2 is in methods and should probably be in results. What is the impact of the segmentation aspect of the approach? No benchmark is performed without segmentation, and no segmentation performance is reported. 
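To make the metric question above concrete, here is a minimal sketch of how MSE, PSNR and SSIM could be computed between a generated image and a reference such as the background image. This is an editorial illustration, not code from the reviewed paper; the function name and the random arrays standing in for real data are hypothetical.

```python
# Illustrative only: comparing a synthesised slice against a reference slice.
import numpy as np
from skimage.metrics import mean_squared_error, peak_signal_noise_ratio, structural_similarity

def image_similarity(reference, generated):
    # Both inputs: single-channel float images scaled to [0, 1].
    return {
        'mse': mean_squared_error(reference, generated),
        'psnr': peak_signal_noise_ratio(reference, generated, data_range=1.0),
        'ssim': structural_similarity(reference, generated, data_range=1.0),
    }

rng = np.random.default_rng(0)
background = rng.random((128, 128))                    # stand-in for a background image
synthesised = np.clip(background + 0.05 * rng.standard_normal((128, 128)), 0.0, 1.0)
print(image_similarity(background, synthesised))
```

Whether such metrics should be computed against the background image or against a held-out real image is exactly the ambiguity the reviewer points out.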
Overall, readers could benefit from more information about the loss behavior during training to better understand each discriminator's impact. For a broader target audience, the authors should specify how to use the network to extract fused radio-genomic feature maps for other tasks, as it is one of the 2 objectives. There is a lack of evidence for the second claim of the paper about the fused representation, reducing the score to weak accept. Strong accept if the authors show further evidence of a fused representation of both genetic and imaging data.""",3,1 midl20_43_2,"""The paper presents a method to combine gene code with image features so as to generate synthetic images. The proposed network takes a background image and gene expression and generates an image of a lung nodule which is characterized by the genomic data. Along with the lung nodule image, which is located within the background image, the network also generates a segmentation mask. Experiments are performed on the NSCLC dataset. 1. The idea of generating images of lung nodules characterized by the gene code is quite interesting. 2. The overall approach can be considered an original work. 3. The paper includes both qualitative and quantitative results. 1. It is mentioned in section 2 that inpainting leads to the loss of regional information and doesn't ensure spatial continuity. This might not be completely true, as the goal of inpainting is to have realistic filling, and a good inpainting network should take care of both regional information as well as spatial continuity. 2. In section 2, it is not clear why there is a need to generate a segmentation mask and weight image. 3. In section 2.2, why does map (-) suppress the nodule information, given that the background is already fed in as the input image? 4. An ablation study pertaining to the three discriminators is missing. 5. In figure 3, the fourth and fifth rows seem similar, so it would be difficult to visually evaluate them. 6. Overall, the clinical motivation to generate nodule images characterized by gene code is not convincing and may not be of great interest to the readers. The paper tackles an interesting problem of generating nodule images conditioned on the gene expression data. The authors present extensive qualitative and quantitative results along with clustering results to show the radiogenomic correlation.""",3,1 midl20_43_3,"""The authors have developed a multi-conditional generative adversarial network (GAN) conditioned on both background images and gene expression code, to synthesize a corresponding image, in a dataset of NSCLC from TCIA. This paper presents a new multi-modal data integration approach in medical imaging research, using GANs. The paper has great potential, but leaves the reader hanging with no proper conclusion. A few additional application-based sub-experiments could make this paper really impactful. * Key Idea - GANs can be used in a multimodal integrated approach for radiogenomic studies * A very impressive idea with tremendous potential * Well written paper that clearly explains the methodology * As a reviewer, I would have liked to see some validation. For example, once this radiogenomic integration method was established, the authors should have done a deeper analysis of how the synthetic nodule correlates with genomic information and the features of the original nodule. This was lightly touched on by the tSNE, but this needs to be fleshed out more. * Another potential approach - Use the TCIA cases; they have multiple radiology and genomic datasets. 
Test your method and its correlative analysis across various cancer sites. A good methods-based paper. The paper talks about the ways in which genomics and radiology data can be integrated. This is very relevant and aligned with the MIDL workshop and its vision. The paper has also been written well, with informative figures.""",3,1 midl20_43_4,"""The proposed method aims to explore the correlation between genetic codes and medical images by synthesising images using a conditional GAN. The multi-label conditional GAN takes in encoded gene information as the style to generate pathological regions, such as nodules, on given background images. The conditional GAN follows a similar structure to the style-based GAN by Karras et al. and MC-GAN by Park et al. The paper combines the knowledge from two domains, i.e. the image and genetic information, from an interesting perspective. The network architecture, losses and training details of the proposed method are clearly written. The method outperforms the in-painting method and the baseline method, namely the multi-conditional GAN by Park et al., in terms of several evaluation metrics. It also shows better clustering capability of gene codes than the other two methods. The motivation for the work is not very strong. The authors pose the problem: metagene clustering and image feature extraction are done as two separate tasks without considering their correlation. To improve this, one would like to have a more complex image feature representation, rather than hand-crafted ones, and take advantage of the correlation between genetic data and image features. However, there seems to be a missing link between the proposed method and the motivation in the introduction, as the proposed method does not directly answer the posed question, except for the clustering analysis. Maybe the authors would like to discuss how the conditional generative model can be used for better feature representation or exploration of correlation. Also it might be interesting to observe, for a given segmentation, what information from the gene expression data is encoded into the gene code. The paper overall is well-structured and thoroughly evaluated. The perspective is also very interesting. However, it may require a smoother connection between the motivation and the proposed method, and more analysis for the clustering part.""",3,1 midl20_44_1,"""The authors propose a method that performs, firstly, a 2D segmentation (U-Net) and, secondly, a tracking (SiamFC). The results of the tracking are analyzed to determine which ones correspond to cell collisions or mitosis. The identification of either of these two patterns is used to refine the initial segmentation. The proposed approach is tested on three different datasets from the Cell Tracking Challenge and the authors show improvements over some of the best accuracy measures reported so far. While neither the segmentation nor the tracking method is novel, the authors introduce the mathematical definition of collision and mitosis into the workflow and manage to improve the performance of the methods, especially in cases in which cell displacements and/or density are large. This approach is interesting because, unlike other tracking algorithms, it does not require a priori probabilities, and the definitions provided are easy to understand/explain. Additionally, the authors validate the proposed technique on publicly available datasets, which allows an objective benchmarking. Finally, they provide Python code that can be used to analyze new datasets. 
There are some parts of the methodology that are not very well written, which makes them hard to understand. It is not clear at all what the differences between the three TAS variants are, how the data augmentation was performed, or how the output of the U-Net was processed. The current definition of collisions or mitosis works only with time steps of size one. Especially for collisions, this might be critical as cells can be together for more than one frame. How does the algorithm deal with this? Besides, I wonder what the effect of false negatives is in this part of the tracking. While the results suggest that this approach improves the tracking accuracy measures, would it be possible to give some examples of mitosis and collisions? Is it possible to visualize the real power of this part of the method? The example shown in Figure 2 is not very representative as part of the collisions are at the edges of the image. From a high level, the approach proposed by the authors lacks novelty. However, the definition of cell behaviors results in more accurate tracking, as expected, and this could also be a way to improve this common task. I would strongly recommend the authors to elaborate more on the description of the method. The details regarding pre- and post-processing, or model fine-tuning, should be adequately reported. """,2,1 midl20_44_2,"""The paper proposes a simple end-to-end cascade neural network to model the movement behaviour of biological cells and predict collision and mitosis events. They use U-Net for an initial segmentation and refine it further using a Siamese tracker along the temporal domain. Their method demonstrates that this tracking approach achieves state-of-the-art results on the PhC-C2DL-PSC, Fluo-N2DH-SIM+ and DIC-C2DH-HeLa datasets of the cell tracking challenge benchmarks. The key ideas & experiments are well written, explained and accompanied by source code. The proposed Tracking-Assisted Segmentation (TAS) is achieved via TAS-General, TAS-Intermediate & TAS-Specialised, where the combination with Collision Detection and Mitosis Detection helps with re-segmentation and fine-tuning of the final results. The key significance of the presented ideas lies in: using Siamese tracking for improved temporal correspondence and re-segmentation of erroneous predictions; robustness to morphology variations and the ability to model rare events such as mitosis, apoptosis and cell collisions; generalization on 3 different biological cell benchmark datasets & outperforming state-of-the-art segmentation methods. The paper is well-written, with appropriate references, background research, benchmark datasets, sufficient experiments, experimental details and source code. 
The paper addresses the issue of cell tracking / segmentation while cells deform due to the processes of mitosis and collision. They propose a Siamese tracking approach to detect such events and combine it with deep learning (UNet) and traditional computer vision (watershed) methods to achieve state-of-the-art results. It would be interesting to see some ideas from recent literature like Transformers / Attention is All You Need and how these could be applied to the cell tracking challenge. Figure 2 in the main paper is not that informative; in fact, Figure 3 from the appendix (the schematic representation of cell collision detection using a Siamese tracker), which illustrates a major contribution of this work, is more informative. The paper is well-written, with appropriate references, background research, benchmark datasets, sufficient experiments, experimental details and source code. """,3,1 midl20_44_3,"""The paper presents a method for segmentation and tracking of biological cells in video sequences using Siamese tracking. The method includes an end-to-end cascade architecture to model biological cell tracking and predict collisions and mitosis. The evaluation is performed on three cell tracking challenge benchmark datasets. The model is designed to be robust to morphological cell variations and to predict events such as mitosis and collisions. The end-to-end cascade neural architecture and different configurations are explained for model interpretability. The validation is performed on three well-known benchmark datasets, where state-of-the-art performance is achieved for two datasets and second-best performance in the third dataset. The paper is well-written and organized. The computational complexity of the proposed method and its comparison with state-of-the-art methods is not included. The authors have not emphasized the clinical application of the method. The manual tuning parameters in TAS-intermediate and TAS-specialised are not clearly explained. Details about experimental implementation and illustration are included in appendices, but these could be included in the main paper. The authors state in the contributions that the proposed method outperforms the state-of-the-art on three datasets, but the results show this on two out of three datasets. The paper describes a cell tracking method using Siamese tracking. The paper has a clear motivation and shows validation on three benchmark datasets with promising performance. The method demonstrates invariance to cell morphology and the ability to handle mitosis and collisions.""",3,1 midl20_44_4,"""This paper proposed a Tracking-Assisted Segmentation (TAS) method to improve the segmentation performance on cells by adding Siamese tracking information. The performance is improved based on the deep SiamFC tracker and U-Net segmentation. Even though each piece of the method comes from existing work, aggregating the methods is not trivial. The SiamFC tracker is pretrained on the GOT-10k dataset to have better generalizability. The method improves the segmentation performance by separating out the wrong segmentation cases such as collision and mitosis. Comprehensive measurements and validations are presented, comparing with the state-of-the-art methods and the top performers in the cell tracking challenge. The value of using Collision detection and Mitosis detection is shown in Table 3. The method is designed as a multi-stage system, which is not compared with the single-stage cell segmentation systems from MICCAI 2019. We don't know the comprehensive comparison between TAS-general vs. TAS-specialised vs. Zhou et al. from Table 2; only Fluo-N2DH-SIM+ is provided for such a comparison. The writing and organization of the paper need to be improved. Some contents are difficult to follow, for example the Mitosis detection section. The performance is marginally worse/better compared with the 1st place in the challenge. Figure 2 is not very informative; maybe switch to a better visualization. 
Even though the system is designed as multi-stage, we can see the authors spent a lot of effort optimizing the workflow. The method is well validated against the top teams in the cell tracking challenge. Collision detection and Mitosis detection are formulated as detection problems to improve the segmentation performance.""",4,1 midl20_45_1,"""The authors propose an image-registration-based method to perform semi-supervised learning on medical images. The method involves registering an unlabelled image to a labelled image and passing it through a segmentation network to produce segmentations that have maximum mutual information with the segmentation of the labelled image. Experiments show that the performance of this method using a fraction of the data reaches Dice scores close to those of networks trained with the entire dataset. 1. The paper is an easy read; however, it requires a check for grammar in several places. 2. The methodology is interesting and simple. 3. The experimental setup is well thought out and the results are good. 1. The use of mutual information to compare transformed segmentations is unclear. Why not use Dice scores instead? 2. There are a couple of works that use image registration to perform semi-supervised, single-shot learning (Chaitanya et al. IPMI, Dalca et al. CVPR). 3. The use of mutual information for semi-supervised learning is not novel as claimed in the paper. 4. Where is the clustering part? The title is quite misleading. 5. MI and KL are intimately related: if you maximise one, the other is minimised. However, in the cost function they have the same sign. In addition, one can just add a factor of 2 to the MI term to get a similar effect. Thus the impressive improvement in Dice compared to the mutual-information experiment is unclear. While the idea of the paper is novel, the methodology seems very ad hoc and not very well thought out. The results are good but counter-intuitive and need better explanation. A better study of the current literature in this space is also essential. """,2,1 midl20_45_2,"""In this paper, the authors propose a semi-supervised segmentation method using a combination of a cross-entropy loss for supervised training and a mutual information loss plus a consistency loss for unsupervised training. The method is validated on three different datasets and proved to be effective in semi-supervised segmentation tasks. However, the technical novelty is limited. (1) This paper is trying to solve an important problem in medical image segmentation tasks, i.e., how to make full use of unlabeled data to promote the performance of the model, and gives a feasible approach to tackle this problem. (2) The paper is well written and easy to follow. (3) The proposed method is validated on three different datasets and achieves good performance. (1) The novelty is limited. This method simply combines several loss functions to solve a semi-supervised segmentation problem, i.e., the cross-entropy loss for labeled data, the mutual information loss from the IIC method, and the consistency loss from (Bortsova et al., 2019) for unlabeled data. (2) The authors claim that the IIC method (Ji et al., 2018) is used to pre-train a segmentation network and needs to be fine-tuned on labeled images. But I didn't find such descriptions in the IIC paper. And I think IIC can provide segmentation results without fine-tuning on labeled images; one just needs to find the correspondence between the clustering results and the ground-truth labels. 
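As a side note on the point just made, the correspondence between unsupervised cluster ids and ground-truth classes is usually found with the Hungarian algorithm. The sketch below illustrates that common recipe (it is not code from the reviewed paper), assuming flattened arrays of per-pixel cluster ids and labels and an equal number of clusters and classes.

```python
# Illustration: map cluster ids to ground-truth classes by maximising agreement.
import numpy as np
from scipy.optimize import linear_sum_assignment

def map_clusters_to_labels(cluster_ids, labels, n_classes):
    # Contingency table between predicted clusters (rows) and true labels (columns).
    cost = np.zeros((n_classes, n_classes), dtype=np.int64)
    for c, l in zip(cluster_ids, labels):
        cost[c, l] += 1
    row_ind, col_ind = linear_sum_assignment(-cost)   # maximise total agreement
    mapping = dict(zip(row_ind, col_ind))
    return np.array([mapping[c] for c in cluster_ids])

clusters = np.array([0, 0, 1, 1, 2, 2, 2])
truth = np.array([2, 2, 0, 0, 1, 1, 1])
print(map_clusters_to_labels(clusters, truth, n_classes=3))   # -> [2 2 0 0 1 1 1]
```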
Although the method in this paper is not novel, it indeed solves an important problem and achieves good performance on three different datasets. I'd like to accept this paper if the authors can address all my concerns.""",2,1 midl20_45_3,"""This paper proposed to use mutual information and consistency regularization for semi-supervised learning. Mutual information is computed at the patch level and the consistency loss at the pixel level. Experiments are performed using three medical imaging datasets. The experimental results look impressive. 1. Good writing, easy to read. 2. The proposed method is sound. Mutual information at the patch level and the consistency loss at the pixel level sound reasonable to me. 3. Extensive experimental results showed that the proposed method is better than many well-known semi-supervised learning methods. I have not yet found any significant weaknesses in this paper. If there is any, I would say it seems to me that the mutual information might be more essential here, but the experiments show the consistency loss gives better numbers than the MI loss. It would be much appreciated if the authors could provide more insights on this matter. Furthermore, the results are only reported with a single number each. It is strongly recommended that the authors run all their experiments 5 times and report the mean and std of the evaluation results. Semi-supervised learning is an important topic, especially for medical image analysis. This paper is well written. The proposed method is clear and well-motivated. Experiments show strong performance of the proposed method. """,4,1 midl20_45_4,"""In this paper, the authors proposed using a clustering loss based on mutual information that explicitly enforces prediction consistency between nearby pixels in unlabeled images, and for random perturbations of these images, while imposing the network to predict the correct labels for annotated images. In addition, they proposed to incorporate another consistency regularization loss which forces the alignment of class probabilities at each pixel of perturbed unlabeled images. Experimental results on several public segmentation datasets demonstrate good performance. 1. In the semi-supervised setting for medical image segmentation, a supervised loss and an unsupervised loss are combined to overcome the requirement for large-scale image annotation. 2. Two unsupervised terms, including a mutual information and a regression loss, are derived to further improve the performance. 3. Extensive experiments on the public datasets validate the efficacy of the proposed method. 1. In the mutual information term, how large is the neighborhood region used when enforcing the mutual information constraint? 2. For the regression term, are there other transformations to be further explored? 3. A detailed methodological comparison of the proposed method to the existing SOTA semi-supervised methods would be helpful. The proposed mutual-information-based deep clustering is interesting for the semi-supervised learning task. The extensive experiments validated the efficacy of the proposed method. Overall, this paper is well written and it can contribute to the category of semi-supervised learning methods. """,4,1 midl20_46_1,"""This is a new method to detect and localize organ bounding boxes within 3D CT using reinforcement learning (deep Q-learning). A reasonable action space and reward function are carefully designed for the application. The experimental results are okay, with the necessary comparison with other baseline methods. 
The paper is well-written and well organized. Experimental results support the claims made in the paper. The idea of using reinforcement learning methods for bounding box detection in CT organ localization is relatively new. The action space and reward function are carefully designed. The paper presents a new application of reinforcement learning in medical image analysis. The task of organ detection/localization has been studied for decades. The detection task itself is somewhat simplified since each subject contains one major organ only. It would be interesting to see how the proposed method performs on a more challenging task, e.g. vertebra localization. The performance of the proposed approach does not show a clear advantage over previous state-of-the-art methods. Overall: the paper's idea is interesting, but the performance of the proposed method has not shown a clear advantage (in either accuracy or efficiency), and the proposed method is similar to existing work in both medical image analysis and computer vision.""",2,1 midl20_46_2,"""The paper proposes to use reinforcement learning (Deep Q-Learning) to find the correct slice to localize a given organ in CT. It claims to be the first work on organ localization with RL, and the results are promising in comparison with other non-RL methods, especially in scarce-data scenarios, tested on multiple organs. - The work is carried out on a public dataset. If the authors release the code, reproducibility of the results would add value. - Comparison with other methods is presented - The method is sound and seems to work fine - Evaluation is thorough and draws a clear picture. - Using a discrete deep RL method to solve a problem which naturally calls for a continuous action space. - Given the discrete action formulation, the authors seem to have missed disclosing the size of the translation steps (t) taken at each step and how annealing this value might have improved the results - The paper describes the Deep Q-Learning (DQL) algorithm in much detail; however, given the history of the method, the details could have been minimized. - This is not the first plane localization using RL in the medical imaging domain. There are similar works such as pseudo-url which also use DQL for localization. The authors seem to replicate the work of [Alansary, 2019] on landmark detection by changing the problem to organ localization. However, there have been other attempts to use RL for plane localization (and with the exact same RL method), which robs the paper of its novelty in application. """,3,1 midl20_46_3,"""Although the authors presented the first RL approach for organ localization, their work has limited technical contribution as it is an extension of the RL-based approach for landmark localization proposed by Alansary et al., 2019. The modification made by the authors is in the output of the Q network where, in addition to the six translation actions (left, right, up, down, forward, backward) used in the landmark localization, there are now five new ones (zoom in, zoom out, flatter, longer, wider). The method is evaluated on the VISCERAL data set and the results are in line with the SOTA. (1) The authors have shown that reinforcement learning can also be used on the task of organ localization. (2) The method is evaluated on a publicly available dataset. (3) The method is clearly presented. In addition to the lack of technical contribution, my main concern is the evaluation of the method. 
Although the authors used a publicly available dataset (VISCERAL), none of the methods they compare to has been evaluated on this dataset. On the other hand, Xu et al. (2019) made the annotations of the LiTS dataset as well as their code publicly available for comparison. Thus, the authors could have directly compared with Xu et al. by using the same dataset or by running the available code on the VISCERAL dataset. Moreover, the authors used only one split for their method evaluation (70 images for training and 20 CT images for testing), which is not the best strategy for evaluating model performance on a limited dataset. The authors should use k-fold cross-validation. Due to the above, a direct comparison of the methods is not possible. However, if we just compare the numbers, the proposed RL method only clearly outperforms RF-based methods, whereas, in comparison to the CNN methods, it is at best in line. This is probably why the authors decided to go for the experiment with a limited number of training images. However, the experiment with only seven training images is not clearly explained. How were the seven images selected, and did the authors perform cross-validation? Finally, the authors should not claim that CNN methods would have needed hundreds of training examples to successfully localize organs, since e.g. the method of Xu et al. used 118 images (compared to 70 used in this work) and achieved the same localization results. RL is a novel direction in the MIA community that has not been evaluated on the task of organ localization. The presented results are in line with the SOTA, although they were not evaluated on the same dataset. """,3,1 midl20_46_4,"""This paper proposed to use RL to localise organs in CT scans. The authors propose to introduce five new scaling actions for the agent's view. Alansary et al. observed that the performance of different training strategies highly depends on the target. This has not been discussed in the presented work. Also, the proposed method would end up with target-specific agents. Would it be possible to link the training and exploration of the task-specific agents into a multi-agent system through sharing their CNN weights? This has been proposed in previous work to make training and inference more robust and accurate and it should be included in this work (not only as future work in the very last sentence). Overall, the paper is well written and presents good results. However, methodologically it is almost indistinguishable from previous work in this domain. The contribution of the proposed scaling actions has not been thoroughly studied. An ablation study has been done with respect to the number of training samples, but equally important would be to study the agents' performance when using only the six translation actions instead of the proposed 11. Hence, the only novelty can be found in the application of RL to CT organ localisation. 
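To illustrate what such a discrete agent could look like, the toy sketch below shows how an 11-action Q-network output, six translations plus the five scaling actions mentioned above, might update an axis-aligned 3D bounding box. This is my own illustration rather than the authors' implementation; the box parameterisation, the step sizes t and s, and the action ordering are all assumptions.

```python
# Toy sketch: applying one of 11 discrete actions to a box [x, y, z, w, h, d].
import numpy as np

TRANSLATIONS = {0: (1, 0, 0), 1: (-1, 0, 0), 2: (0, 1, 0),
                3: (0, -1, 0), 4: (0, 0, 1), 5: (0, 0, -1)}

def apply_action(box, action, t=2.0, s=0.05):
    # Actions 0-5 translate the centre; 6 zoom in, 7 zoom out, 8 flatter, 9 longer, 10 wider.
    box = box.astype(float).copy()
    if action in TRANSLATIONS:
        box[:3] += t * np.array(TRANSLATIONS[action])
    elif action == 6:
        box[3:] *= 1.0 - s          # shrink all sides
    elif action == 7:
        box[3:] *= 1.0 + s          # grow all sides
    elif action == 8:
        box[5] *= 1.0 - s           # reduce depth
    elif action == 9:
        box[4] *= 1.0 + s           # increase height
    elif action == 10:
        box[3] *= 1.0 + s           # increase width
    return box

# Greedy step given Q-values from a network would be: action = int(np.argmax(q_values)).
print(apply_action(np.array([50.0, 60.0, 40.0, 30.0, 30.0, 20.0]), action=7))
```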
- application of RL is relevant and can lead to alternative research directions - the paper is well written - the comparison to other CT organ localisation work has been done well - it has been shown that RL agents can learn from little data - the ablation study is incomplete and would also need to investigate the contribution of the scaling actions - multi-agent systems would need to be compared to properly build on the state of the art - It's only the 'first time' that RL has been evaluated for this very particular niche task The area and application are interesting and if the authors manage to address my concerns it should be an interesting oral presentation at MIDL. Please remove this minimum character requirement. What else shall I say? I have been personally invited to review for MIDL so I should have competence to do so. A paper-matching system is in place to make sure the reviewers match the paper topic, so what else shall I justify here? """,3,1 midl20_47_1,"""This paper proposed a Laplacian-operator-based loss as an extra boundary enhancement loss for segmentation networks and applied the method to two datasets for evaluation. pros: 1. Since the Laplacian operator can measure curvature, a Laplacian-based loss may contribute to boundary enhancement, as the paper states. 2. The paper is easy to understand. cons: 1. Some figures are uninformative, for example, Fig. 1(a). 2. Is it reasonable to use a series of single-channel 3D convolutions to approximate the Laplacian operator, given that it is a 2nd derivative? Please convince me using either equations or visualizations. 3. The comparison experiments are limited: there are many boundary-related losses, yet only one is compared with. 4. The Dice shown in Table I cannot convince me that the Laplacian operator is better than the one with the boundary loss. Performance is the same in Task 1. For Task 09, it seems all networks can achieve high Dice, so Task 09 can hardly prove anything. Suggestions: 1. Can you please list the standard deviation? Can you please also provide the computational cost? 2. Why is UNet's Dice so low in Task 1?""",2,0 midl20_47_2,"""The paper presents a boundary enhancement loss to enforce additional constraints when optimizing machine learning models, which combines discrete Laplacian filtering and an L2 loss to emphasize the boundary regions. The experiments show the effectiveness of incorporating the proposed loss on brain tumor MRI segmentation and spleen CT segmentation.""",3,0 midl20_47_3,"""This paper is about improving the boundary of a predicted object in the task of image segmentation. Compared to other losses, they show that they get on-par or slightly better performance. The authors propose a novel loss that uses the Laplacian filter (which can be implemented efficiently as successive convolution layers) and then minimizes the L2 norm of the difference between the filtered output and the filtered ground truth. Notice that this doesn't require defining the contour on the continuous softmax predictions, which is often difficult or intractable. They get very good results on two different datasets. Much more work and evaluation could be done for this loss, and I am really looking forward to seeing a more detailed version of this work. But the current form definitely deserves a spot at MIDL2020. Misc: - [Kervadec et al. 2018] is actually a MIDL2019 paper """,4,0 midl20_47_4,"""This paper proposes to improve the segmentation quality of boundary areas in medical images. 
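As a concrete reading of the Laplacian-filter loss described in the reviews above, the sketch below filters both the soft prediction and the ground truth with a discrete Laplacian kernel and penalises their squared difference. This is an assumption-laden illustration rather than the authors' code: the 2D kernel, the equal weighting with cross-entropy and all names are illustrative choices (the reviewed paper reportedly approximates the operator with a series of single-channel 3D convolutions).

```python
# Illustration of a Laplacian-based boundary enhancement loss in 2D.
import torch
import torch.nn.functional as F

LAPLACIAN_2D = torch.tensor([[0., 1., 0.],
                             [1., -4., 1.],
                             [0., 1., 0.]]).view(1, 1, 3, 3)

def laplacian_boundary_loss(pred, target):
    # pred, target: (N, 1, H, W); pred is a soft foreground probability map.
    kernel = LAPLACIAN_2D.to(pred.device, pred.dtype)
    edge_pred = F.conv2d(pred, kernel, padding=1)
    edge_target = F.conv2d(target, kernel, padding=1)
    return F.mse_loss(edge_pred, edge_target)

pred = torch.rand(2, 1, 64, 64, requires_grad=True)
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
loss = 0.5 * F.binary_cross_entropy(pred, target) + 0.5 * laplacian_boundary_loss(pred, target)
loss.backward()
```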
It proposes a loss function that is inspired by Laplacian of Gaussian (LoG) filtering for edge detection. The proposed method is claimed to be lightweight. Pros: 1) This loss function is inspired by Laplacian of Gaussian (LoG) filtering, and this formulation is suitable for the task of medical image segmentation. 2) This paper is well-written. 3) This loss seems to be easier to implement than competing methods and has competitive results. Cons: 1) -- As it is inspired by LoG, this loss is not novel. 2) -- The paper says it does not require post-processing. However, the convolution operation seems to be post-processing. 3) -- The authors argue that this loss is lightweight, but there is no quantitative evaluation of the computational cost and time consumption compared with other works. 4) -- The results are only comparable to other methods. It would also be interesting to see the combination of this loss with other methods. Basically, this is a paper with a simple idea and insufficient experiments. As this is a short paper I recommend weak accept, but it is actually not good enough. I would not be upset if it is rejected.""",3,0 midl20_48_1,"""The manuscript evaluates ways of training networks that integrate information from multiple-view mammography images for the classification of malignant and benign cases. Due to the discrepancies between the CC and MLO views, training of the sub-networks could suffer from uneven training gradients, which causes suboptimal results. Different integration strategies and regularization strategies are compared. The manuscript addresses a legitimate issue in training networks integrating multi-modality data, and the experience learned from the study has value in guiding similar practices in such training tasks. The latest techniques in the field are identified, cited, and compared. Since the overall approach is largely inspired by previous publications (Wu et al., 2019; McKinney et al., 2020), the novelty of the proposed network is limited. Also, the proposed model architecture, which makes slight modifications to existing models, fails to outperform previously published results. More details need to be provided, e.g. whether bilateral images are used at the same time. The comparison study seems to have been conducted in a hurry and the results are incomplete. Also, due to the high level of performance of previous publications, e.g. Wu et al., 2019 and McKinney et al., 2020, the study fails to improve the level of performance and therefore provides only incremental value to the field.""",2,1 midl20_48_2,"""This paper explores the idea of classifying breast X-ray images to find tumors by using two different image orientations at the same time. This is a very interesting topic in DL-based medical image processing and therefore fits perfectly in the scope of the conference, because human radiologists also consider images taken from multiple orientations. We are not sure how to integrate the knowledge from X-rays of multiple orientations, therefore the kind of research this paper is presenting is very important. Unfortunately, this paper makes only very limited steps towards achieving its ambitious goal. The topic is very interesting. Integration of knowledge from multiple image orientations is definitely something that we need to use, not just in this, but in many other imaging applications. The authors present a clear and comprehensive review of previously published methods, which is very valuable for readers. The experiments are presented in detail, mostly in the appendix. 
The amount of training and testing data is very impressive. The results and conclusions of the paper are very limited. The difference between experimental tests is small, and the conclusions and lessons learned are limited. However, the authors could make an effort to extend the paper with a Discussion section, as is usual in scientific papers. Currently, some thoughts that would belong in the Discussion are mixed into the Results and other sections. A link to the source code would be even more useful than the appendix. The topic is very relevant and interesting. The authors are well prepared and present the literature accurately. The only reason I'm not recommending accept more strongly is that the results are not very interesting, probably due to the difficult problem the authors are trying to solve.""",3,1 midl20_48_3,"""In this paper, the authors show that classifiers trained using multiple views or modalities tend to rely too strongly on one or the other of the input branches. The hypothesis is that this problem occurs because each branch of the model learns at a different speed and contributes to the training loss at a different scale. To address this issue, the authors investigate different ways of training the model and different regularizers. The experimental results show that weight sharing among the different branches and modality dropout boost the performance of the classifier. I enjoyed reading the paper. It is very well written and structured. The context and issue are clearly stated. The hypothesis is well defined and backed by experiments. The solutions investigated are interesting, particularly the choice of regularizers. Heatmaps are also given as input to the model. How do they influence the model performance? An experiment with only the images would be interesting to see. Training the classifier to predict benign findings is described as an auxiliary task for regularization. It would be good to have a baseline experiment where the classifier is trained only for the main task to see the influence of this regularization. The model variants section could be extended. In Figure 2, the fusion of the 2 branches is done with an element-wise sum. As the problem addressed is the fusion of the information extracted in the different branches, it would be good to investigate other fusion alternatives (feature concatenation, product). The paper is well structured and the pain point and hypothesis are well explained and illustrated by experiments. The proposed solutions are meaningful. The experimental part could be extended a bit but the results are already interesting.""",3,1 midl20_48_4,"""This paper utilizes two images captured at two different angles for breast cancer diagnosis. In particular, images from both views are fed to a ResNet-22 separately and the FC-layer features are later fused, followed by two binary classifications predicting the presence or absence of malignant and benign findings. Multi-view information fusion is a state-of-the-art technique to improve classification performance in medical imaging. Results are presented using different variants of multi-view information fusion. I would like to see the results compared and validated with state-of-the-art techniques for breast cancer diagnosis; this would show where the paper stands in terms of existing research. The discussion of results should be more elaborate. The paper is well written and easy to follow. It combines multiple views together to achieve better classification performance. 
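For illustration, a two-view classifier with shared encoder weights, element-wise-sum fusion and a simple form of modality dropout, the ingredients discussed in the reviews above, could be sketched as follows. This is a hypothetical toy model, not the architecture of any of the reviewed papers; the dropout probability, the encoder and the head are placeholders.

```python
# Toy two-view classifier: shared encoder, view dropout, element-wise-sum fusion.
import torch
import torch.nn as nn

class TwoViewClassifier(nn.Module):
    def __init__(self, encoder, feat_dim, p_drop_view=0.2):
        super().__init__()
        self.encoder = encoder              # same weights applied to both views
        self.p_drop_view = p_drop_view
        self.head = nn.Linear(feat_dim, 2)  # e.g. malignant / benign findings

    def forward(self, view_a, view_b):
        f_a, f_b = self.encoder(view_a), self.encoder(view_b)
        if self.training and torch.rand(1).item() < self.p_drop_view:
            # Randomly silence one view so neither branch can dominate training.
            if torch.rand(1).item() < 0.5:
                f_a = torch.zeros_like(f_a)
            else:
                f_b = torch.zeros_like(f_b)
        return self.head(f_a + f_b)         # element-wise sum fusion

encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU())
model = TwoViewClassifier(encoder, feat_dim=128)
logits = model(torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64))
```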
Comparison of the proposed method with some recent methods in the literature is needed before acceptance. """,4,1 midl20_49_1,"""The authors proposed a 4D encoder-decoder CNN with convolutional recurrent gate units to learn multiple sclerosis (MS) lesion activity maps using 3D volumes from 2 time points. The proposed architecture connects the encoder and decoder with GRUs to incorporate temporal information. It's compared to an earlier method which uses a 3D network and time-point concatenation, and reports improvements in Dice scores, false positive rates and true positive rates. The improvement gained by the proposed method validates the effectiveness of recurrent units, and the most significant gain is in the false positive rates. Meanwhile, a few clarifications may be necessary: 1) in terms of runtime, does the addition of GRUs take much more training time and memory compared to the concatenation of 3D volumes? 2) what is the dimension of the input: is it H x W x D or T x H x W x D? If it's the latter, is the convolution done with a 4D filter? 3) more details about the convGRU may be useful, for example its architecture. Overall, the problem the paper tackles is critical, and the proposed network component is effective to some extent. The conclusion is more like a validation of the usefulness of the temporal information, while the technical novelty may not be very sufficient in this case.""",2,0 midl20_49_2,"""The authors present their work on identifying MS lesion change (appearing and enlarging lesions). This is a well-written abstract and an interesting method. The method uses GRU modules to include two or more images of the patient to identify lesion activity. I assume the method processes FLAIR images, but this is not 100% clear to me. Figure 1 seems to suggest that lesion maps are fed into the model instead of FLAIR images. Can the authors clarify this? The authors identify three time points for each subject: HS (an early scan), BL (baseline, comes after HS), and FU (follow-up, the most recent scan). In the results, models are compared on T=2 (BL and FU) and T=3 (BL, FU, and HS). T=3 seems to work better and the authors suggest that the added history might help. However, an alternative hypothesis for this improved performance could be that the difference between HS and FU is much larger than between BL and FU; hence adding HS works. It would be interesting to also add T=2 with HS and FU, because I suspect that it is just the longer time period between HS and FU that leads to increased lesion activity that is easier to detect. It is unclear to me whether the authors used a separate validation dataset for optimizing the hyperparameters of their model, or whether the reported results are on the test set that was also used to select the best performing parameters and results. Did the human raters have access to HS when annotating lesion activity? The authors look for 'new and enlarging' lesions (abstract): what about disappearing lesions?""",4,0 midl20_49_3,"""The idea of the paper is good. Results support the idea and give an increase in performance compared to baseline methods. Considering the page limit, the paper is well written. It would be nice if the authors could provide a reference for the lesion-wise false positive rate (LFPR) and lesion-wise true positive rate (LTPR). """,3,0 midl20_49_4,"""The paper proposes to include recurrent layers in an encoder-decoder architecture to improve the segmentation of lesion activity. 
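Since several of the comments above ask for details of the convolutional GRU, here is a minimal 2D ConvGRU cell sketch. It is my own assumption of what such a unit could look like; the reviewed paper's exact architecture, kernel sizes and dimensionality are not specified in the reviews.

```python
# Minimal convolutional GRU cell operating on 2D feature maps.
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, in_ch, hidden_ch, k=3):
        super().__init__()
        pad = k // 2
        self.gates = nn.Conv2d(in_ch + hidden_ch, 2 * hidden_ch, k, padding=pad)  # update + reset
        self.cand = nn.Conv2d(in_ch + hidden_ch, hidden_ch, k, padding=pad)       # candidate state

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde

cell = ConvGRUCell(in_ch=16, hidden_ch=32)
h = torch.zeros(1, 32, 48, 48)
for _ in range(3):                       # e.g. iterate over the HS, BL and FU time points
    h = cell(torch.rand(1, 16, 48, 48), h)
```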
The method is clearly justified and introduced and shows clear improvements over a baseline that concatenates encodings of different time points. The extension of including recurrent connections for temporal problems seems rather obvious but still requires effort to get working correctly. The simplicity of the approach, the applicability to different temporal applications, and convincing results make this paper a nice and interesting read. The paper lacks clarity in the exact experimental setup and I personally would have liked a better introduction to lesion activity segmentation. Fig. 1 seems to suggest that the paper aims to segment the differences in lesion segmentations between two different time points. The figure might be slightly misleading, assuming that HS is taken before BL. HS shows some lesion in the top right corner that isn't present in BL. Is that regular behaviour for those tasks? - Furthermore, it seems that the authors did not use a validation set for developing and validating their method but might report results that are overfit to the test set. - It is unclear how the full-volume activity maps are generated: how are overlapping patches aggregated? Is there any consideration for boundary effects? - The metrics seem not clearly defined: does FPs mean voxel-wise false positives? How are lesion-wise metrics defined? - Lastly, do you ensure that the baseline has a similar capacity to the model including GRUs? Is there a similar memory footprint or number of model parameters? If not, this might not be a fair comparison. It would be interesting to explore whether this model can generalise to unknown lengths of history. Also, could it be helpful to have intermediate segmentation or activity supervision to use the varying time points as an additional training signal?""",3,0 midl20_50_1,""" The main idea behind the paper is to make network training more robust to noisy annotations, specifically False Negatives (FNs). The paper proposes to use the bootstrap loss function to handle FN annotations. The paper is easy to follow, proposes a good method and validates it against a private dataset of decent size. + Paper is easy to follow with a clear motivation for the proposed method. + The method is well validated with two different types of artificial data censoring. + Results on the evaluated metric show the usefulness of the proposed method. + Implementation details are clear and should make the paper easily reproducible if the dataset is made publicly available. + Limitations of the work are clearly noted and the authors don't claim to solve all the issues with noisy annotations. - There are issues regarding Equations 2 and 3. - In the paragraph following Eq. 2, it is mentioned that "" pseudo-formula means 90% of our loss comes from the CE between our predictions and our (potentially noisy) target annotations while 10% of our loss comes from the feedback loop of the bootstrap component."" This is false: as written, Eq. 2 has the form w * CE(Y, ŷ) + CE(ŷ, argmax(ŷ)), so the weighting parameter clearly reduces the effect of the classical cross-entropy loss (first term) to 0.9, but the second term is still weighted 1. - In the last sentence above Eq. 3 it is mentioned that ""With pseudo-formula , this loss simple reduces to class-based loss weighting where positive cases are unweighted by pseudo-formula "". This is false, as that setting will simplify Eq. 3 to 1[Y==1] * CE(Y, ŷ) + 1[Y==0] * CE(ŷ, argmax(ŷ)). 
It is clear that here, for the negative class, there is no classical cross-entropy loss term; it is only weighted by the CE loss between the prediction probability and the predicted classification one-hot encoding. - Justification for calculating Dice only for TPs is necessary. Is the reported Dice value computed on a lesion-level basis or a volume-level basis? - In size-based lesion censoring, all metrics are reported for all lesion loads; it would be nice if these were reported for small lesions and non-small lesions separately, as in this experiment only small lesions were censored. - The authors note that in size-based censoring of small lesions, their proposed method still misses a lot of small lesions. Can they please justify, in this case, what the main benefit of the proposed method is? The paper proposes an interesting approach to tackle noise in the label annotations. The paper is well validated. But there are issues with the equations of the proposed loss function. The evaluation metric also needs better justification. """,2,1 midl20_50_2,"""This paper presents an approach to regularize the training of a segmentation model to reduce the impact of false negatives in the training set. The proposed loss gives more weight to the positive class while penalizing the entropy of the predictions on the negative class. The evaluation is performed on a recently published dataset for brain metastasis segmentation. - The paper tackles an interesting problem related to practical issues of human errors and labeling times for medical image segmentation. - The problem is well defined, justified and presented, and it feels like the authors know the medical aspects well. - This paper misses a large part of the literature on noisy-label learning. This problem has a significant number of publications every year that are not discussed at all here. - The formulation of the loss is not intuitive nor explained clearly enough, with imprecise notation (see detailed comments section). - The experimental protocol is not thorough: the hyper-parameters of the loss are tested within very limited ranges and miss key values (1 for pseudo-formula and 0 for pseudo-formula); the censoring is not explained properly, as it is not clear if the data is censored randomly at the start of the training or censored differently for every iteration (which would make a significant difference in the stochastic case). - This work does not compare with any baseline: a (well tuned) cross-entropy baseline would have been expected in the tables. I feel this is an important missing point of this paper. What is called the baseline in the abstract is not a baseline. - The conclusions are either very general (e.g. ""When developing deep learning applications, consideration should be given not only to the number of samples required for the desired network performance but also the other costs of acquiring such data, such as annotator time."") or not well supported claims (e.g. ""Using the bootstrap loss cannot fully abolish size-based biases.""). This makes the conclusions sound more like speculation than demonstration. Overall, this paper presents an interesting problem but lacks rigor in the presentation of the proposed solution (notations, explanations) and the experiments are not thorough enough. 
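To make the kind of loss discussed above more tangible, here is a hedged sketch of a bootstrapped binary cross-entropy that up-weights annotated positives and partially trusts the network's own confident predictions where the annotation says background, so that missed (false-negative) lesions are penalised less. This is my reconstruction for illustration only, not the authors' Eq. 2 or Eq. 3; the weights alpha and beta and the hard-bootstrap choice are assumptions.

```python
# Illustrative bootstrapped BCE for training with possible false-negative annotations.
import torch
import torch.nn.functional as F

def bootstrap_bce(logits, target, alpha=2.0, beta=0.9):
    # logits, target: (N, 1, H, W); alpha up-weights positives, beta mixes CE vs bootstrap.
    prob = torch.sigmoid(logits)
    ce = F.binary_cross_entropy(prob, target, reduction='none')
    self_target = (prob > 0.5).float()                       # network's own hard predictions
    boot = F.binary_cross_entropy(prob, self_target, reduction='none')
    pos, neg = (target == 1).float(), (target == 0).float()
    return (alpha * pos * ce + neg * (beta * ce + (1.0 - beta) * boot)).mean()

logits = torch.randn(2, 1, 64, 64, requires_grad=True)
target = (torch.rand(2, 1, 64, 64) > 0.9).float()
bootstrap_bce(logits, target).backward()
```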
The dataset used for evaluation seems not to be publicly available, but the authors also did not put effort into evaluating against a decent baseline.""",2,1 midl20_50_3,"""This paper describes a new method for brain metastasis segmentation using a network trained with robustness to annotations with multiple false negatives. The novel contributions of the authors in this paper are: 1. a segmentation network with whole-lesion false-negative labels; 2. the method preserves performance for high induced FN rates (as much as 50%). As the authors specify in their paper, their network can overcome segmentation errors introduced into the labels in two ways: 1. Stochastic Censoring: random censoring errors are induced in the true label segmentation; 2. Size-Based Censoring: the smallest lesions (by volume) across all patients are censored. Their model was able to overcome these errors with high performance (98%). This is a very well written paper with a new method and the results are eye-catching. The material and methods have been described well. The results section is clear and the figures describe their work properly. The discussion section is complete and describes the methods that were used in this paper. The number of cases is not large and they are not from multiple sites. Equations 1, 2 and 3 are somewhat unclear and need more definitions. The original annotations have not been validated among multiple readers for measurement of inter-reader variability. The effect of different scanners on the performance of their model was not assessed. For Goals, fully achieved; i.e. the authors accomplished their results and target associated with the particular goal being rated. For Performance Factors, results met all standards, expectations, and objectives. For Overall Performance, expectations were consistently met and the quality of work overall was very good.""",4,1 midl20_50_4,"""This paper aims to develop a method for training using noisy labels. In order to achieve this, a new loss function is developed based on entropy regularization. A simulated dataset is generated by randomly censoring lesions to create false negatives. The novel bootstrap loss function improves segmentation performance when training on data with false negatives. - The paper is well-structured and the case for the novel loss function is clearly outlined. - There is extensive validation, with examples, on the simulation, and also of how the results are affected by the size of the training data. - the images in figure 4b are too small to evaluate - maybe these can be enlarged for the next submission. - it is not clear from the title/abstract that the aim is to improve detection rather than segmentation. This paper is interesting, novel, and well-structured, and the validation, while primarily based on simulation, is thorough and well thought through. In addition, the limitations are clearly outlined in the discussion.""",4,1 midl20_51_1,"""This paper seeks to decouple some of the possible sources of noise in MRI acquisition, via a model that treats the noise sources as independent and thus the associated variances as additive. Using this idea and simulated artefacts, the authors present proof-of-concept results related to the task of image segmentation. The results are of a qualitative nature. The main theme of this paper, that of training neural networks to provide measures of uncertainty associated with candidate image artefacts, is innovative. 
The authors carry this out via simulated ""augmentations"" in k-space (fake image artefacts) that are used to sequentially train 3 different networks (Fig. 1). Should the assumption of independence of the effect of the artefacts on MRI image intensity hold, the strategy to design appropriate loss functions seems appropriate. The idea of applying the simulated artefacts in k-space and then inverse FFT'ing to get simulated MRI artefacts is reasonable. I found this to be a bit of a ""seat of the pants"" approach, where the assumption of independence is not clearly justified in any way, and yet it is the entire premise of the method. The authors state ""While interactions with task uncertainty (task harder to learn with noisier data) or between degradation types (blurring and noise for instance) exist, their modelling would require the learning of new covariance terms and would greatly complexify both model and training procedure."" I think they hit upon the key issues here. How would the effect of image artefacts truly be independent? Surely even in the most simple scenarios, the effects of blurring, RF noise etc. could co-occur. I also found there to be no attempt to present the experimental results with quantitative measures or analyses. The qualitative examples here do not convince me that the predicted task uncertainties, with respect to grey matter segmentation, are correct, or that a much simpler image processing method would not do the trick. I think the authors are on to an interesting problem, and the basic problem this paper seeks to solve is both valid and important. However, the assumptions made are not clearly justified, and aspects of the design are not easy to assess. In essence, the authors assume that the effects of artefacts are by nature decoupled, when the title of the paper suggests that the article will provide a means for doing this. In the formulation (Fig. 1 and associated text) I do understand that 3 separate networks were trained, with the losses coupled and with additional terms introduced to penalize differences in uncertainty predictions, etc. But rather than just saying how these loss functions are heuristically designed, it would help to better motivate the details and the design choices. """,2,1 midl20_51_2,"""The authors present good work that aims to automate quality control in MRI images. As a quality measure, the authors use the voxel uncertainty associated with a segmentation task, in this case grey matter segmentation. The uncertainty of this task can have different sources, and this fact is the main novelty of the article, since the authors estimate the uncertainty due to each different artefact. 
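The additive-variance idea discussed above could be written down roughly as follows. This is a hedged sketch in the spirit of Kendall and Gal's heteroscedastic loss, not the authors' code: a network is assumed to predict one log-variance map per artefact source, and the variances are simply summed under the independence assumption the reviewers question.

```python
# Illustrative heteroscedastic loss with per-artefact variances summed (independence assumption).
import torch

def heteroscedastic_loss(pred, target, log_vars):
    # pred, target: (N, 1, H, W); log_vars: (N, K, H, W), one channel per artefact source.
    total_var = torch.exp(log_vars).sum(dim=1, keepdim=True)
    nll = 0.5 * (pred - target) ** 2 / total_var + 0.5 * torch.log(total_var)
    return nll.mean()

pred = torch.rand(2, 1, 32, 32, requires_grad=True)
target = torch.rand(2, 1, 32, 32)
log_vars = torch.zeros(2, 3, 32, 32, requires_grad=True)   # e.g. noise, blur, motion
heteroscedastic_loss(pred, target, log_vars).backward()
```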
- The employment of cascading teacher-student networks, which allows creating a ""surrogate truth"" of the uncertainties per image artifact, which reinforces the uncertainty estimation - The adaptation of the framework described by Kendall and Gal to estimate the uncertainties from a multi-task perspective without the employment of any kind of uncertainty label in the aforementioned architecture - The adoption of the cascading teacher-student networks plus the uncertainty framework, which allows stronger model regularization due to the employment of uncertainties from the augmentation process as uncertainty labels - Lack of validation - The approach is just tested for one dataset - The assumption of independence of the different variances - Facilitate understanding of the work - The quantitative results are clearly insufficient and unclear - The description for obtaining the entries is missing In the field of medical image processing, it is common to find small adaptations of methods coming from the field of computer vision; however, this work goes further and proposes novel approaches to these methods that are also of clear clinical interest. """,4,1 midl20_51_3,"""The paper proposes a solution to Quality Control (QC) in MR images with artifacts by modelling the problem as a heteroscedastic uncertainty estimation for different artifacts individually. The proposed method is theoretically grounded, and the limitations of the work and its assumptions are clearly stated. Experiments on a simulated and a real-world dataset show the usefulness of the method. + The problem and the proposed method are well motivated. + Decoupling of multiple uncertainties seems like a clever idea. + Results show that when the network is trained on the simulated data it is able to decouple artifacts in real-world data. + Good qualitative results. - A combination of different loss terms (L1, L1 on the gradient, and SSIM) is used for consistency between teacher-student network uncertainties. It would be nice if the effect of each of these loss terms were evaluated separately. - As the predicted uncertainty is heteroscedastic (i.e. input dependent and not task-dependent), it would be nice to see if the learnt uncertainty generalizes to other segmentation tasks without using K-space artifact augmentation during training. - Though the qualitative images are good and show the usefulness of the method, it would be good if better quantitative results for each task uncertainty and how it can be used for automatic QC were provided. The paper proposes a novel method for artifact estimation individually for different sources of artifacts. Results are promising and show the effectiveness of the method. The experiment on the real-world data shows the applicability of the method. """,4,1 midl20_51_4,"""This paper outlines a multi-stage student-teacher CNN model for decoupling different sources of image quality issues in MRI data. This is an important topic and, as the authors correctly note, image quality is task dependent; therefore modelling uncertainty for a tissue-class segmentation task makes sense for neuroimaging applications. - the evaluation using different types of localized artifacts in figure 3 is convincing. - performing the artifact simulation in k-space makes sense. - the student-teacher network approach is novel, although it would potentially benefit from a comparison with a more straightforward approach. - the pixel-wise uncertainty measure may be useful for some tasks. This paper is a little light on validation on 'real-world' artifacts e.g.
the quantitation in figure 4 doesn't distinguish between different types of artifacts, which was the main justification for the model design. It is also not clear what the simulation in figure 5 is attempting to illustrate. It would be useful to have more detail on how this was carried out and why it is important, included in the methods or the results section. Using aleatoric uncertainty on a segmentation task for image quality control is a novel idea and potentially useful for neuroimaging research studies where segmentation is often a crucial post-processing step. The framework outlined here could be expanded to other tasks. One downside to this paper is an absence of a strong validation on real-world artifacts.""",3,1 midl20_52_1,"""This paper extends the SPADE-GAN framework for generating images from masks. In particular, it involves a segmentation network in the conventional two-player game. The segmentation network works as a ""representation"" enhancement for the generated images. Combining the enhanced ""representation"" (which is actually a segmentation map) and the real/generated masks, the discriminator can learn better. 1. The methodology introducing a segmentor to the two-player game is reasonable. I agree with the argument ""If we train a generator from scratch, then it learns a general representation of images. Since we want to use the synthetic images for segmentation task, we want to ensure the images lie within close proximity to the real images in the latent representation, based on which the segmentor makes its decision."" 2. Very detailed description of the method. 3. The method seems promising though the improvement of Dice is limited compared to SPADE-GAN. 1. The experiments are insufficient. At least, I'd like to see what the quality of the generated images is through direct evaluation (segmentation is an indirect evaluation). 2. No analysis. What's the cause of the performance gain towards SPADE-GAN? Is it because of the introduced segmentor, the new network architecture, or the training? 3. No standard deviation provided. 4. How does your global label contribute to the method? Is there an analysis in the experimental section? 1. The methodology is reasonable and I agree with their argument. But the story is a little bit inconsistent: the authors emphasize the global label, while I am not convinced that the global information in the generator works or solves the problem they mentioned. 2. The provided experiments are really not sufficient.""",3,1 midl20_52_2,"""The paper proposed to use SPADE to perform conditioned image generation to cope with the class imbalance problem. The information the generator conditions on includes both the local one, which is the segmentation mask, and the global one, which can be the acquisition center or lesion type. A segmentor is further incorporated into the architecture, which is claimed to help the downstream segmentation task. 1. The paper is well written and easy to follow 2. I'm glad the authors have provided a lot of training details 3. The review of the related works section is satisfactory 4. The centre ID conditioned generation will be of interest to industry The main weakness, I think, is that the experimental results are not enough to justify the claimed contributions. I do not see how the proposed local-global conditioning allows mitigating the ""synthesis dilemma"" as claimed in the paper. The global information, e.g. acquisition center, is essentially a special case of a mask where all pixels inside the mask share the same value.
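To make the reviewer's observation above concrete (a global label being just a mask whose pixels all share one value), here is a hedged sketch of how a scalar label such as an acquisition-centre ID could be broadcast into a constant-valued spatial map and concatenated to the segmentation mask for SPADE-style conditioning. The function and shapes are illustrative assumptions, not taken from the reviewed paper.

```python
import numpy as np

def global_label_to_map(label_id: int, num_labels: int, height: int, width: int) -> np.ndarray:
    """Return a one-hot label broadcast to a constant (num_labels, H, W) map."""
    one_hot = np.zeros(num_labels, dtype=np.float32)
    one_hot[label_id] = 1.0
    return np.broadcast_to(one_hot[:, None, None], (num_labels, height, width)).copy()

# Example: concatenate a 3-class segmentation mask with a 4-centre ID map.
mask = np.zeros((3, 256, 256), dtype=np.float32)          # one-hot segmentation mask
centre_map = global_label_to_map(label_id=1, num_labels=4, height=256, width=256)
conditioning = np.concatenate([mask, centre_map], axis=0)  # (7, 256, 256) conditioning input
print(conditioning.shape)
```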
Is the original SPADE method not able to handle this? There are many works out there that claim the GAN generated images are beneficial for the training. The proposed work just used a more recent approach to perform the generation. I don't see any clear evidence that the segmentor is actually contributing to the improvement. I'm giving a weak rejection for now. If the authors can provide concrete evidence, I'm more than happy to change my rating.""",2,1 midl20_52_3,"""In this work, the authors modified the SPADE framework by adding class-wise information (protocol, vendor, etc.) to synthesize medical images for different modalities/protocols. They add a pre-trained U-net segmentor to constrain the synthesized images to the proximity of real images. The result shows improvement in the segmentation task by adding synthesized images. This work would probably mitigate data scarcity in the medical field by cross-class image synthesis. 1. Very natural migration from a single condition (SPADE) to a dual conditional GAN 2. The U-net segmentor added to the framework to help maintain consistency is an interesting idea 3. Clear result presentation and sufficient validation. The method part is not very clearly written. 1. The authors do not show how the segmentor plays a role in constraining the synthesis. 2. The GAN loss part is not very clearly explained. 3. No explanation on the transformation of a mask to a 1024x8x8 tensor The major contribution of this work is the global information utilized and the U-net segmentation network used in the GAN. It contributes to cross-modality/protocol image synthesis, which mitigates data scarcity greatly. The constraint added to the image synthesis could be a double-edged sword since it balances between the similarity and variability of image synthesis. This work would be very promising if it shows more variability in image synthesis.""",3,1 midl20_52_4,"""This paper proposed a GAN-based method for segmentation data augmentation. Specifically, they focus on the problem of class imbalance, i.e. the data imbalance of brain data from different medical centers and of skin data of different lesion types. They proposed a GAN based method to generate medical images from segmentation masks as a way of generating synthetic images. The experimental results showed improvement of segmentation with the proposed data augmentation. - Good introduction and motivation. - Propose a GAN-based method for data augmentation. - Generally written well. - Adopt state-of-the-art conditioning technique (SPADE). - Experiments on two publicly available datasets (BraTS and ISIC). - Some confusions about the baseline and proposed method, e.g. what is SPADE-GAN? How is it different from the proposed method? - Method design is not clearly described and motivated. - Lacks an ablation study. - Improvements in the results are minimal without a statistical significance test. - Confusion in the description of the experiments. Especially the description of single-class augmentation and balanced augmentation is not easy to follow. - Lacks some experiments, e.g. what are the segmentation results if you do not use synthetic images at all? The method is interesting but it seems to lack some experiments and clarification.
For example, they only compared the results with segmentation using another GAN method, but what are the segmentation results if no data augmentation is used?""",3,1 midl20_53_1,"""This paper uses a variational auto-encoder (a U-net) combined with a decoder (a CNN) to address the problem of image segmentation with only weak supervision. The context here is applicability to unlabeled medical image volumes. The notion of ""weak"" here is the assumption of an available prior on segmentation labels, with an otherwise unlabeled training dataset. The method is described in Section 2 and is tested with a naive prior and an MRF prior, on a dataset of 38 3D MRI scans. The main empirical finding is that this form of weak supervision improves performance over a naive baseline and that the MRF prior performs better than the weak one. The main strength is that the method is described clearly enough in Section 2, and that it appears to be an almost direct application of VAEs (Kingma and Welling 2013), with associated design choices. Thus, though methodological novelty is quite modest, the approach seems well motivated. The paper is well written for the most part, with a few minor typos here and there (probable, an extra or missing article here and there, etc.). The results demonstrate a type of proof of concept. The paper could be improved. I'm a bit concerned that the ideas promoted here are rather standard by now, in that VAEs are used for all sorts of problem domains where labeled data is not available, and where the basic idea is to evaluate an encoding (in this case a segmentation) based on the error of a decoding (in this case a reconstruction), using the prior. The particular application here illustrates the proof of concept, but there are many assumptions that are not made clear at the outset. What is done is clear enough, but why is not always obvious, and the underlying limitations/assumptions are not adequately discussed. A key issue here is the need for an improved discussion of the notions of a ""good"" prior. The results, though proof of concept, are sound, and I think the authors are onto something interesting. I do worry though that this is the type of thing that many in the ML applied to medical imaging community are doing. I worry too that the basic assumptions were not clearly stated up front. This does not appear to be a general method, but rather, one that could give plausible results when the assumptions are met. Many design choices are not fully motivated. Finally, the choice of suitable priors is itself an interesting problem to nail down.""",3,1 midl20_53_2,"""This paper proposes an original variational auto-encoder framework for segmenting images based solely on an atlas prior. The method is indeed completely unsupervised; no segmentation labels are needed for the training images. The paper is clear and the mathematical method well explained (even if more implementation details are needed). Results show the effectiveness of the proposed method but they are not completely convincing and some details need to be clarified. - Authors present an interesting mathematical model to integrate an atlas prior into a variational auto-encoder segmentation network. - The paper is well-written, well-organised and easy to read. - The method is tested on brain images but it could be extended to other organs. - As in classical atlas-based segmentation methods, the registration is a key point. If the atlas is not correctly registered to the test images, the segmentation cannot be accurate.
Even if the authors mention this point in the conclusion as future work, this should have been better discussed in the paper. For instance, the baseline EM method, does it use the same registration technique as pre-processing? - One of the main results of this paper is the shorter computational time with respect to the EM baseline method. Indeed, the accuracies are quite similar. I think that it would be important to add computational times in Table 1 to make this point clearer to the reader. - Some implementation details are not well explained. For instance, authors should briefly explain the Gumbel-softmax relaxation scheme. Furthermore, how is initialised? Especially for the first 16 subjects? Why not use an inverse-Wishart prior for ? - In the discussion, authors mention that the proposed method ""opens up to possibility to deploy it on new imaging techniques"". How exactly? If the atlas prior is not of the same imaging technique as the test or training images, how would that be possible? Please comment on that. This paper presents a new and interesting method for using an atlas prior in deep learning segmentation. However, some points are not clearly addressed (see Weaknesses). Most importantly, computational times should be added to Table 1 to make results more convincing.""",3,1 midl20_53_3,"""This paper proposes to use an auto-encoder to produce segmentations, guided by an atlas prior. The encoder represents the segmentation network, while the decoder reconstructs an image from a segmentation. Two priors are experimented upon, a pixel-independent class-prior, and a pair-wise MRF prior. This is implemented by the use of a KL with a latent distribution q, for computational tractability. The framework allows the use of unpaired images and segmentations, which is useful in practice. - The paper is very clear, well organised, and very well written. Ideas are easy to follow - Variants are proposed, to show the flexibility of the SAE: 2 different priors, and two different ways of obtaining the Atlas (derived from multiple subjects or just one) - The experiments are well validated, compared to pertinent methods, and using multiple iterations (5). - From a methodological standpoint, SAE does not need paired segmentations / images, and is flexible to many priors, which looks promising for future work. The paper presents no major weaknesses. In Figure 2, it is a little difficult to see which boxplots refer to a proposed method, and which refer to an upper/lower baseline or a benchmark method. The variability of regional results (PAL, AMY, CAU, CT, HIP, ...) could have been discussed a little more. The paper proposes a principled and flexible framework which has the interesting benefit of relieving the necessity of paired images/manual segmentations. It is well organised, well written, and easy to follow.""",4,1 midl20_53_4,"""The goal of this work is to alleviate the annotation work of the training data for supervised learning, and make use of an existing segmentation atlas. The authors proposed a variational autoencoder segmentation strategy. It takes the segmentation as the latent feature, and the atlas as the feature prior. Using the idea of VAE, the output of the encoder was forced to be close to the prior, and can be decoded into images as similar as possible to the original input of the encoder. Thus the atlas or the prior has a great effect in practice. The method was evaluated on brain MRI scans, and compared to EM, it achieved promising results. There are mainly two strengths: 1.
The strategy proposed in this paper can be seen as an atlas-based method. As it uses networks to predict the segmentation, it takes less time in the test stage. 2. The method can make use of prior knowledge. As shown in this paper, both the probability map and the MRF knowledge were taken into account, and achieved improvement in segmentation accuracy. 1. As described in Section 3.1, all data were preprocessed before training or testing, including affine registration, which also took some time. Hence, the computation time can be higher in practice, as discussed in Section 3.5. 2. The prior, such as the probability map, has too great an effect on the results. In theory, the prior in the training stage should be the prior of the test data; hence the atlas should be selected from the same distribution as the test data, which can be different from the Atlas1 or Atlas2 in Section 3.2. 3. When the prior p(l) is known, the more direct loss would be the MSE loss between the expectation of the output probability and the prior p(l), i.e., ||E[q(l|x)] - p(l)||. This can be taken as the deep learning-based baseline, compared with the variational version. This work developed a deep learning-based strategy to utilize prior knowledge for image segmentation. They use the idea of VAE, and force the segmentation to be close to the prior, such that it can further be decoded into images similar to the original one. Hence the prior has a great effect. In experiments, they use the atlas probability map and the MRF to deduce the prior distribution, and achieved improvements in accuracy. The method is an atlas-based approach with a neural network, and gives a framework to take prior knowledge into account. The method is quite interesting, and the paper is well written and easy to understand. However, the novelty of this work is limited. """,3,1 midl20_54_1,"""This paper presents a method to denoise low-dose CT images. In contrast to previously proposed methods, no proprietary projection data is required. Instead, the method operates on the image domain as well as on the spatial frequency domain. The authors show that the combination of networks operating in the image and spatial frequency domain leads to quantitatively better denoising results. Strengths - It's an interesting idea to apply a U-Net not only in the image domain but also in the Fourier domain. It's good that the method does not require sinogram data. - Experiments are well-structured and results are compared with statistical analysis. - The results show that operating in the spatial frequency domain has added value over operating only in the image domain. Weaknesses - There has already been a lot of work on deep learning-based CT image denoising. E.g. using wavelet transforms instead of Fourier transforms pseudo-url or using generative adversarial networks pseudo-url. In this context, the use of a perceptual loss is also not novel pseudo-url. None of these works are mentioned in the paper. - The data set used is quite small and the denoising results are only evaluated using quantitative measures that don't take into account for which clinical application the images are made. It would be good to add evaluation using a clinical task, e.g. nodule detection. - The authors write that networks demonstrated exceptional contrast between [..] vessels and liver tissue but this is not quantified in any way. In fact, all results in Fig. 1 look nearly identical; I'm not convinced that adding a spatial frequency domain network has much practical value.
- Networks now operate in sequence, but it may be more interesting to operate them in parallel so that errors are not propagated. Detailed comments - A method to denoise CT images in a non-image domain has previously been proposed: - Please explain how image intensities were normalized; was this by linear scaling between two HU values? - How was a value of 0.84 selected for alpha? """,3,0 midl20_54_2,"""The paper proposed to denoise low-dose CT in both the image and spatial frequency domains with the combination of L1 and MS-SSIM loss. Pros: -Well motivated -The method is easy to follow Cons: -The proposed work only compared vertically with different compositions of the I and F U-net but has not compared horizontally with other LDCT denoising methods. For example: Kang, Eunhee, Junhong Min, and Jong Chul Ye. ""A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction."" Medical Physics 44.10 (2017): e360-e375, which also proposed to denoise in the frequency domain -Why was K set to 2x10^6 to overweight the MS-SSIM loss for the frequency domain network?""",3,0 midl20_54_3,"""The authors propose to perform a dual U-net in both the frequency and image domain to denoise low-dose CT images. Although the methodology is sound, I disagree with the authors in that ""refining the spatial frequency of the CT image improves low-dose reconstructions when used in conjunction with an image-domain network"". PSNR improvement over a simple image-based U-net is marginal (+0.3 dB), especially when considering the additional complexity. I am therefore sorry to recommend rejection. """,1,0 midl20_54_4,"""This paper tests the hypothesis that a dual-domain cascade of U-nets outperforms single-domain cascades. The results suggest that this is the case. The paper is straightforward, well-structured and the aims, methods, results and discussion are interesting, informative and clearly presented. Minor comments: The URL link to Ronneberger's U-net paper is broken. """,4,0 midl20_55_1,"""This paper conducts an exhaustive set of experiments on three different segmentation loss functions including weighted and unweighted variants, for the task of micro-aneurysm segmentation. The interesting bit is that the authors report findings going against what one would expect: loss functions designed for handling class imbalance largely underperform the standard cross-entropy loss. - I believe in papers that take experimentation seriously and report results that make one re-consider the universal usefulness of widely accepted strategies, in this case for handling class imbalance: maybe one should not take for granted that using focal loss or tuning the class weights of a cross-entropy loss will always lead to better results than using a simple CE baseline. - The paper contains experiments training on E-ophtha and testing on both E-ophtha and a second external dataset, which is something everybody should do but few researchers do (although I have my doubts on using Messidor - see below). - There is a very large number of experiments that seem to be doing hyperparameter optimization in a rigorous manner, and this process is described with detail. - The provided discussion is quite rich, and not just an ""I need to fill one paragraph with generic re-statement of the results""-like discussion. - Obviously there is not a big deal of novelty in this paper (no new idea is presented), but if it is considered as a validation paper I would be ok with that.
- Results are presented in a somewhat hard-to-digest manner, and would benefit a lot from more tables. Specifically, the FROC curves are ok, but it is very hard to read the actual numbers out of Figure 1's legend. It would have been much better to have a separate table with the Area under the FROC and AP. - I am not very convinced about using Messidor to assess micro-aneurysm detection. I mean, if one takes only images from Messidor that have Diabetic Retinopathy grade 1, then it is ok, you will have micro-aneurysms there. But further grades do not imply the presence of micro-aneurysms. In Messidor-1 you can have grade 2 by having hemorrhages or grade 3 by having neo-vascularization, and not micro-aneurysms. In addition, I believe we should now be using Messidor-2, which was released several years ago and contains >1700 images with updated grading*. * Google released new grades for those images here: pseudo-url Although this paper might probably be penalized by the lack of novelty (and I myself was tempted to choose weak reject), I believe that if the authors follow some of the above advice to polish it a bit, it could be an interesting piece of experimental/validation research. I have also found in my own work that often baseline loss functions outperform other fancy contributions from recent famous papers, and I feel it could be interesting for people to know that we should pay more attention to having a proper baseline before blindly following new ""trends"". In any case, there are several weaknesses to this paper that should be addressed either now, time allowing, or in future submissions of this otherwise interesting work.""",3,1 midl20_55_2,"""The authors present a comparison of different objective functions for the segmentation/detection of micro-aneurysms in retinal images. The micro-aneurysms present only a very small proportion of pixels in the input image, which may have adverse effects on learning. However, none of the objective functions that were tested were able to improve upon the cross-entropy. The authors correctly point out that the large class-imbalance can be challenging in many problems related to segmentation/detection in medical imaging. Many different loss functions have been proposed, and the direct comparison of the performance of each of them in a particular setting is enlightening. - The paper is not very well written. The long and incoherent paragraphs make it very hard to read. Please consider separating concepts into different paragraphs. - The choice of evaluation metrics is confusing. I don't think AUC on image level is appropriate to validate a detection/segmentation problem (also see point below). Dice or FROC-score are probably more appropriate, but I'm missing the results. The authors mention FL achieves better results for pixel segmentation, but based on what metric? - The contribution of using the segmentation method as an image-level classifier is questionable. Many methods for DR classification have already been developed, and achieve expert-level performance. The classification of DR depends not just on micro-aneurysms, but also on the presence of hemorrhages, bright lesions, cotton wool spots. I do believe there is value in the type of comparison of different objective functions as performed in this paper. However, the presentation and experimental setup are of insufficient quality.
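For concreteness, here is a hedged sketch of the two pixel-wise losses these reviews compare: standard (optionally class-weighted) binary cross-entropy and the focal loss of Lin et al. This is a generic textbook formulation for illustration, not the reviewed paper's implementation; parameter values are placeholders.

```python
import numpy as np

def binary_cross_entropy(p: np.ndarray, y: np.ndarray, pos_weight: float = 1.0) -> float:
    """Mean (optionally class-weighted) binary cross-entropy over all pixels."""
    eps = 1e-7
    p = np.clip(p, eps, 1.0 - eps)
    per_pixel = -(pos_weight * y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
    return float(per_pixel.mean())

def focal_loss(p: np.ndarray, y: np.ndarray, gamma: float = 2.0, alpha: float = 0.25) -> float:
    """Mean binary focal loss; down-weights easy, well-classified pixels."""
    eps = 1e-7
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(y == 1, p, 1.0 - p)                    # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    per_pixel = -alpha_t * (1.0 - pt) ** gamma * np.log(pt)
    return float(per_pixel.mean())

probs = np.random.rand(64, 64)                           # predicted MA probabilities (toy)
labels = (np.random.rand(64, 64) > 0.99).astype(float)   # very sparse lesion mask (toy)
print(binary_cross_entropy(probs, labels), focal_loss(probs, labels))
```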
Also, the results are not clearly communicated; it is often unclear to which specific results the authors refer.""",2,1 midl20_55_3,"""The authors face the problem of detecting retinal microaneurysms (MA) using two different approaches: (1) segmentation of the damaged area and (2) image classification. They train a residual U-net to segment the images. Then, using a specific threshold for the output probability maps, they infer the image-level classification. The imbalance of the MA pixels impedes obtaining accurate results in their detection. The aim of this work is to evaluate six different loss functions to train the network under the same conditions and determine which is the most suitable one to process unbalanced data. The authors train a well known deep neural network architecture (residual U-net) using publicly available data and known loss functions, which makes the whole work easy to reproduce and benchmark against other approaches. Additionally, they test their method in two kinds of datasets: (1) an independent subset of the dataset used for the training and (2) a completely independent set of images. This supports more objective conclusions and makes it possible to evaluate how general this methodology can be. The authors aim to evaluate the performance of weighted cost functions when unbalanced data is processed. Contrary to what is claimed when weighted cost functions are proposed, they come up with the common cross-entropy and focal loss being the ones that provide better results. However, for the training, during the data augmentation, they rebalance the data by including those original crops and five augmentations of them whenever the crop contains MA pixels. I would say that this is probably the reason why the weighted cost functions do not improve the accuracy results. Indeed, the authors cite the work of Sudre et al., 2017, which is similar to this one, but in their case, they avoid any data augmentation to preserve the imbalance between classes so the conclusions are different. Another very common approach for this kind of situation, also proposed by Ronneberger et al., 2015, is to define a weight map or a sampling pdf over the pixels in the image to weight the loss function according to each pixel and increase the weight of the unbalanced ones (pixel-wise weighted loss). Is there any specific reason why the authors did not try this approach? The authors provide an extended evaluation of the method and a deep discussion of the results obtained. However, I think their conclusions could be affected (biased) by the distribution of the training data.""",3,1 midl20_56_1,"""The paper proposes a recurrent multi-scale architecture for motion prediction in free-breathing MRIs. * The paper says: ""We split each volunteer dataset in 60/20/20 for training, validation and testing, respectively."" It appears that each of the 12 volunteers' images was split in ""60/20/20"" along the time-axis and included in each of the training, validation and test sets. I don't think this is the right way to split the data. The training / test / validation sets should contain images from different subjects. With the current data split, I don't think the results can be trusted. * Also, several things in the description of the method are unclear to me: - what is the difference between 'displacement fields', 'motion fields' and 'motion labels'? - ""To that end, the ranges of values for each vectorial component, i.e.
axes x and y, are quantized into b bins according to the data distribution."" Which data distribution?""",1,0 midl20_56_2,"""The paper presents an encoder-decoder architecture to predict motion for liver MRI. First, the authors generate a displacement field between pairs of images using an (unknown) registration framework, then encode this displacement field into a label, and generate a codebook between the label and the quantized vectorial components of the displacement field. Secondly, the authors train the network (decoder + LSTM) to predict the motion label, and finally use the codebook to recover the motion field. The method is validated using 50 MRI scans (2D) coming from 15 volunteers; the vessel tracking error is given for the presented method and two other relevant methods, showing improved accuracy for the presented method. Pros: - (real time?) motion estimation for MRI-guided therapy is a really emerging problem, and so the presented approach is an interesting contribution Cons: - the approach consists of several steps, while it is not really clear whether they are needed. - Would it be possible to train the encoder-decoder to predict motion directly from the sequence? - What does the extra quantization add to overall accuracy? Since this is a 2D(?) acquisition, and the problem described is breathing motion, is there any issue with out-of-plane motion? Could this explain the rather large error at the end of the sequence? It is also not clear whether this acquisition is 2D or 3D. Page 1 says that the registration is done between images to produce a 2D motion field, then the data description (Page 3) says pixel spacing and slice thickness. Is it 3D MRI split into 2D slices? testing/validating - 50 MRI scans from 12 volunteers. Were the same volunteer scans used for training and testing? - the authors wrote that the results are significantly better, but no test or p-value is given - what is vessel tracking error? - how many landmarks were used? More general problem to consider: What registration between pairs of images was used? There has been a bit of research done on (both MRI and CT) liver motion estimation using discontinuous registration (""A locally adaptive regularization based on anisotropic diffusion for deformable image registration of sliding organs."" IEEE Transactions on Medical Imaging 32.11 (2013): 2114-2126. ""GIFTed Demons: deformable image registration with local structure-preserving regularization using supervoxels for liver applications."" Journal of Medical Imaging 5.2 (2018): 024001.) """,2,0 midl20_56_3,"""Summary: The authors propose a multi-scale encoder-decoder architecture to predict breathing-induced 2D organ deformation in future frames. They train and test on just 12 MR sequences of unknown origin. In the evaluation, they compare their method with two other methods and show that their method performs best. For me, a few important explanations and experiments are missing in this paper. Overall, I think the authors present interesting ideas to predict deformation in future images. The method is still at the beginning of its development and there are a lot of things to work on. However, I think it might be helpful to present this work at MIDL and discuss further developments. Pros: The authors deal with the difficult question of motion prediction in future images. To reduce the search space of possible motions, first, they analyse the motion in the training sequence and quantize it into b bins, thereby converting the regression task into a classification task.
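A rough sketch of the quantization idea just described: each component of a 2D displacement is binned (here by quantiles of the training distribution) and the (x-bin, y-bin) pair becomes a class label, with a codebook mapping labels back to representative displacements. This is an illustrative assumption of how such a scheme could look; the paper's exact binning and codebook construction may differ.

```python
import numpy as np

def build_bin_edges(values: np.ndarray, b: int) -> np.ndarray:
    """Quantile-based bin edges so each of the b bins holds roughly equal mass."""
    return np.quantile(values, np.linspace(0.0, 1.0, b + 1))[1:-1]

def quantize_displacements(disp: np.ndarray, b: int):
    """disp: (N, 2) array of (dx, dy). Returns class labels and a codebook."""
    edges_x = build_bin_edges(disp[:, 0], b)
    edges_y = build_bin_edges(disp[:, 1], b)
    ix = np.digitize(disp[:, 0], edges_x)        # bin index per axis, in [0, b)
    iy = np.digitize(disp[:, 1], edges_y)
    labels = ix * b + iy                         # single class label per vector
    codebook = {l: disp[labels == l].mean(axis=0) for l in np.unique(labels)}
    return labels, codebook

disp = np.random.randn(1000, 2) * 3.0            # toy displacement vectors (mm)
labels, codebook = quantize_displacements(disp, b=8)
recovered = np.stack([codebook[l] for l in labels])  # decode labels back to vectors
print(labels.shape, recovered.shape)
```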
I think this is an interesting way to handle this difficult task. The paper is well written and mostly easy to follow. Open Questions: What is Q? Which image registration method is used to align the images? How good is this method? Are the authors *introducing* or *using* the weighted cross-entropy loss function? Where are the data from? What kind of manual annotations are available? (How many landmarks on which positions?) How good is the mean landmark location error if the identity is used as the deformation field? If I understand correctly, one training per patient is needed. So for a clinical application, you first have to acquire a sequence of a patient to train the network. Afterward, the trained network can be applied during the treatment. How long does the training take? Are results worse when there is a longer break between training and inference? What is the runtime of the method during inference? Cons: The authors don't stick to the 3-page limit. After reading the paper, I still have open questions that should be answered in the paper. The mentioned related work is quite old (from 2002, 2012, 2013). I would assume that in the past seven years, people have worked on this topic as well. If I understand correctly, no regularisation of the deformation field is used. Does the method generate smooth deformation fields without foldings? An analysis of the volume changes and foldings is missing. Are the authors *introducing* or *using* the weighted cross-entropy loss function? In Figure 3, the time axis doesn't have a unit. """,3,0 midl20_56_4,"""As the manuscript title suggests, the authors propose applying an encoder-decoder architecture originally introduced for motion dynamics learning in a computer vision context to motion prediction in 2D(+t) liver MR imaging. Following the description in the manuscript, in contrast to the corresponding CVPR 2017 publication, the authors introduce a multiscale block to extract features from three different spatial scales. The actual temporal prediction is performed by a convolutional LSTM. The short paper is well structured and an interesting read. Indeed, I would have liked to read more details about the applied method (e.g. the exact motivation and setup of the codebook) but the page limit recommendation for short papers is in some contradiction to this. Overall assessment: The contribution transfers and adapts a CVPR 2017-published methodical paper to the medical imaging domain (here: spatio-temporal MR imaging). Thus: One can argue that the methodical novelty of the contribution is limited; I nevertheless like the paper. Minor aspects: - Fig. 2 and Table 1 seem to be in contradiction (at least in parts): While NCC for Enc-Dec is worse than PCA, the corresponding LM tracking errors show a different picture. This needs some explanation. - The discussion states that the proposed model *significantly* outperforms the other approaches. How was this statistically tested? - The last sentence of the discussion is irritating: If one can identify the vessels and anatomical positions but at erroneously predicted positions, why should this be helpful? """,3,0 midl20_57_1,"""The authors of this paper present the results of their segmentation model when applied to segment CT images with different levels of simulated dose. pro: - The clinical relevance, context, and method are clearly described. cons: - This paper presents no methodological contributions or novelty; - Lacks details on deep learning methods: how was it trained and validated?
Just mentioning 'previously trained' without references is insufficient; - The data is insufficiently described. How was the reference standard obtained, for example? - Lacks references. Conclusion: Although the experiments that were performed by the authors make sense if they want to understand the robustness of their segmentation model against dose reduction, there is very little novelty or interest for the broader community in this paper.""",1,0 midl20_57_2,"""This paper evaluates how a deep learning segmentation algorithm performs in CT imaging with synthetically reduced counts (low dose images) across 7 anatomical regions of interest. The paper evaluates the segmentation performance of an algorithm over increasingly reduced CT imaging counts, but does not provide any methodological innovation. The paper does not provide any details as to the deep learning segmentation method used. It is also unclear how the algorithm was trained for this task. Overall, the results are to be expected in that segmentation performance declines as image quality degrades. Presuming this algorithm was trained on clean CT imaging (full count), it is unsurprising that segmentation results generalize poorly on noisy CT test data. Enthusiasm for the results is also limited due to testing on n=5 images. For future work, one possibility might be to train on simulated low-dose imaging and see how well the segmentation performs. """,1,0 midl20_57_3,"""Summary: A pre-trained organ segmentation network based on a CNN model is evaluated with different noise and CT dose settings. These variations are simulated by adding structured noise. It is reported that the Dice accuracy is reasonable even when CT dosage is reduced by 30%. Strengths: + The question of reducing CT dosage is an important one and the set-up used here with the noise models can be useful. + The conclusion that CT dosage can be reduced by up to 30% is an important one. + The experiments and the plots look convincing. Weakness: - Perhaps the authors are not used to submitting to MIDL-like conferences, as the paper lacks some essential components, such as the description of the models/data used, comparison, experimental set-up, and citations. I would encourage the authors to investigate this research question further as it is of value to the community. In terms of presenting the work, perhaps reading papers from previous versions of the conference can be a good starting point to help organize the work in a manner that is accessible to the MIDL community. """,1,0 midl20_57_4,"""This abstract presents an investigation into how the performance of deep learning based segmentation suffers in the presence of Poisson noise. Results show that a 50% reduction in CT dose led to a 25% reduction in Dice coefficient. No strategies for alleviating this reduction in performance were proposed and it is not clearly highlighted where the novelty lies in this work.""",1,0 midl20_58_1,"""This paper proposes to generate artificial fluorescent images, both Fundus Fluorescence Angiography and Fluorescein Leakage, using GAN networks. Although the authors use some traditional concepts, such as saliency maps and adversarial networks, they developed new approaches to generate background and foreground images and some modifications on the loss function. Some experiments with the HRA and MISP datasets were conducted with acceptable PSNR and SSIM values. This paper is clear and well-written.
The methodology properly explains the way to calculate the saliency map, the conditional adversarial network and the loss function. Maybe the most relevant contribution is a successful adaptation of previous knowledge and techniques to a specific clinical application. So, the needed changes to the previous architecture, loss function and other technical issues for fundus retinal images are properly developed and tested in this work. Unfortunately, the Isfahan MISP dataset was not available at the moment I tried to get access. Furthermore, the HRA dataset has not been published by the authors (as far as I know). So, this issue makes a fair future comparison of results difficult. Although PSNR and SSIM are two well-known quantitative metrics of quality, I would suggest that the authors consider other metrics in future work (see pseudo-url). Maybe these quality measures are not the most standard metrics to evaluate generated images using GAN networks. Although the technical novelty of this paper is minimal from my point of view, the original contributions (adaptation of architecture and local saliency loss) are interesting ways to solve this specific problem. Maybe the most important point for my final rating is the experimental results, which show a good behavior of the adversarial network. However, I do not consider it a strong accept, mainly because the used metrics are only focused on image quality criteria.""",3,1 midl20_58_2,"""The paper applies a conditional GAN to translate images from the fundus image domain into the fundus fluorescence angiography (FFA) image domain. A local saliency loss is proposed to facilitate the learning of small-vessel and fluorescein leakage features. The method has been validated with a private dataset and the publicly available Isfahan MISP dataset. 1. The whole method is clearly described and details are explained to show the universality of the proposed method. 2. The application itself is interesting and experiments have been performed on clinical datasets. 1. The motivation of image translation from fundus images to FFA images is not sufficiently explained. Compared with fundus images, can FFA images provide other more valuable information for physicians? 2. The validation of the proposed method is limited. The authors should better validate the proposed local saliency loss to show its effectiveness, since the improved performance may come from the increased complexity of the model. 3. The comparison with other similar deep learning frameworks should be included to show the superiority of the proposed network. This paper presents a deep learning framework for transforming one image modality to the other using a conditional GAN. The whole framework is in general well-presented. The topic seems interesting but still needs more evidence to show the motivation for doing this. The methodological contribution should be better validated in the experimental parts. """,3,1 midl20_58_3,"""This paper has two major contributions. One is that a local saliency map is used in the GAN loss; the other is the newly introduced loss, a combination of global and local losses. The proposed method outperforms the other two comparison methods. However, the comparison experiments are not enough to prove their claims. 1. The main strength of this work is the introduction of the local saliency map, putting more weight on high-frequency regions. This strategy can efficiently improve the synthesis performance on details. 2. The introduction of the proposed method is very clear. 3. The paper is well written and easy to follow.
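To make the "local saliency loss" idea mentioned above more tangible, here is a hedged sketch of a saliency-weighted reconstruction term: an L1 penalty re-weighted by a saliency map that emphasizes high-frequency structures such as small vessels. This is a generic illustration under my own assumptions (gradient magnitude as the saliency proxy, a placeholder weight lam), not the authors' exact loss.

```python
import numpy as np

def gradient_saliency(image: np.ndarray) -> np.ndarray:
    """Simple saliency proxy: normalized gradient magnitude of the target image."""
    gy, gx = np.gradient(image)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return mag / (mag.max() + 1e-8)

def saliency_weighted_l1(pred: np.ndarray, target: np.ndarray, lam: float = 10.0) -> float:
    """Global L1 term plus an extra L1 term weighted by local saliency."""
    sal = gradient_saliency(target)
    global_term = np.abs(pred - target).mean()
    local_term = (sal * np.abs(pred - target)).mean()
    return float(global_term + lam * local_term)

target = np.random.rand(256, 256)                    # real FFA image (toy)
pred = target + 0.05 * np.random.randn(256, 256)     # generated FFA image (toy)
print(saliency_weighted_l1(pred, target))
```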
1. Since the network architecture is based on (inspired by) previous work (PatchGAN), I think the authors should add experiments to validate the performance against this baseline. 2. An ablation study should be added to validate the effectiveness of the proposed loss function. The proposed method has achieved quite good performance compared to related works. The idea is simple and effective. The paper is well written. I hope the authors can add more comparison experiments to further prove their claims. """,3,1 midl20_59_1,"""This paper proposes modifications to classical adversarial UDA by adding a reconstruction loss, which is motivated by the application, histology images, where segmentation output masks and input images show similarities. Then, the classical Cross Entropy in the source domain is replaced by a Dice loss. The experiments are done on histology images. - The paper is clear and easy to follow. The ideas are straightforward. - Unsupervised as well as semi-supervised frameworks are explored, with a growing number of training examples. - Experiments are done on the 2 adaptation directions, on histology images. - Figures are nice. - The novelty of the paper compared to many classical UDA works [1,2], i.e. the introduction of the reconstruction loss, isn't made clear enough. - Although clear, the paper could be more concise. - The models using semi-supervision could have been compared to Dong + semi-supervision as well, to compare with another DA method. - The improvement yielded by the reconstruction loss is actually limited - Some typos (introduction) [1] Dong et al., Unsupervised Domain Adaptation for Automatic Estimation of Cardiothoracic Ratio [2] Tsai et al., Learning to Adapt Structured Output Space for Semantic Segmentation The paper is well written, and is easy to follow. Novelty is limited, and could be better discussed/clarified. The improvement yielded by the reconstruction loss is limited. Further, in the SSDA framework, no comparison has been made with a method actually using a domain adaptation technique.""",3,1 midl20_59_2,"""The paper presents a domain adaptation method for cell segmentation. The motivation of the method is based on the fact that the ground truth labels are domain invariant, so the model is trained to produce outputs for the target data that look like those from the source data. The authors run experiments in unsupervised domain adaptation settings and semi-supervised domain adaptation. I particularly liked that the authors compared how their method performs compared to the baselines when different amounts of labeled data are accessible. The motivation is good. The paper is very clear: the loss functions and different components of the model are well defined. The experiments are bi-directional (Domain A to B and B to A). Comparison with a baseline is done. Some ablation study is done. -I wish the authors would include more domain adaptation and transfer learning baselines. -Also, it would have been better if more datasets were considered. -There are some typos in the paper but they are not major issues. -There is no comparison with other methods on this dataset. As mentioned before, I consider this paper technically sound and the experiments support the motivation and the hypothesis of the paper. I think this paper would be a good addition to the conference. """,3,1 midl20_59_3,"""To segment cells without labelled data, the authors proposed an unsupervised deep learning method based on domain adaptation.
The proposed method learns to segment instances in the target domain by learning supervised segmentation in the source domain that is regularized with an adversarial loss that keeps the distribution of the segmentation prediction in the target and source domain similar. Additionally, the authors use a reconstruction loss to ensure that target predictions spatially correspond to the target images. (1) The method is evaluated on two publicly available cell datasets (KIRC and TNBC) that are interchangeably used as a source or target domain. (2) The method is evaluated in unsupervised and semi-supervised scenarios. (1) The references are chaotic, especially those in the second paragraph of the Introduction. There is no explanation of how the papers cited in the manuscript are related to the proposed method, and thus why the authors chose exactly them. The cited papers should lead the reader to the contribution of the manuscript, which is not the case with this manuscript. Furthermore, authors should not use the arXiv version for citation, since all the papers cited in the manuscript were published in prestigious conferences (CVPR, ECCV, ...) or journals (TMI). (2) Mainly because of the previous point, it is not clear whether the manuscript has a technical contribution. It seems as if the proposed architecture is already being used in the CV community, but it is not clear whether there is a paper that used all three losses in the same way as the authors do. Some similar architectures can also be found tested on cell datasets [1]. Authors should be clear about their contribution. (3) Did the authors try to reconstruct not just the target domain, but also the source domain? (4) There is no comparison with SOTA. The DA-ADV method by Dong et al. presented at MICCAI 2018 is good work on semantic segmentation of the lung in X-ray images, but not evaluated on cell segmentation. Moreover, it is not explained how this approach is used for instance segmentation. How does the Dong et al. method differ from their approach without the reconstruction loss, i.e. CellSegUDA w/o recons? Did the authors use the original code of Dong et al.? What about the results of other methods (e.g. [1])? Are the results of U-Net (target-trained) on these datasets in line with SOTA methods learned in a supervised manner? (5) Why is there a decrease in performance from U-Net (source 100% + target 25%) to U-Net (source 100% + target 50%) of more than 4%? I also find it unfair to write U-Net (source 100% + target XX%), while CellSegSSDA (XX%). It would be more consistent to also write CellSegSSDA (source 100% + target XX%). (6) What are the results of the authors' approach on the target domain when trained with all source and all target images that are labelled, i.e. CellSegSSDA (source 100% + target 100%)? I would expect the results to be at least as good as U-Net (target-trained). (7) Comparison of the segmentation results presented in Fig. 4 is difficult. Yellow and blue arrows are sparse and not helpful, mainly due to previous method and following method. (8) The following sentence is misleading: our proposed UDA method, CellSegUDA, outperforms both of a fully-supervised model trained in the source domain, and a baseline UDA model. It is not clear whether it has been evaluated on the target domain. Thus, CellSegUDA is better than U-Net (source-trained) but not U-Net (target-trained). A better formulation could be: our proposed UDA method, CellSegUDA, outperforms a fully-supervised model trained on the source domain and evaluated on the target domain.
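As a reading aid for the objective described at the start of this review (supervised segmentation on the source domain, an adversarial term on target predictions, and a reconstruction term on target images), here is a schematic, hedged sketch of how such a combined loss could be composed. The small lambda values echo the ~0.01 weights another review mentions; all function names and the exact adversarial formulation are my own placeholders, not the paper's code.

```python
import numpy as np

def dice_loss(pred: np.ndarray, target: np.ndarray, eps: float = 1e-6) -> float:
    """Soft Dice loss on a single-channel prediction/ground-truth pair."""
    inter = (pred * target).sum()
    return float(1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps))

def total_segmenter_loss(source_pred, source_gt, disc_score_target,
                         target_recon, target_img,
                         lam_adv: float = 0.01, lam_rec: float = 0.01) -> float:
    seg = dice_loss(source_pred, source_gt)                       # supervised, source domain
    # adversarial term: discriminator should score target predictions as "source-like"
    adv = float(-np.log(np.clip(disc_score_target, 1e-7, 1.0)).mean())
    rec = float(np.abs(target_recon - target_img).mean())         # reconstruction, target domain
    return seg + lam_adv * adv + lam_rec * rec

# toy arrays standing in for network outputs
sp = np.random.rand(64, 64)
sg = (np.random.rand(64, 64) > 0.5).astype(float)
print(total_segmenter_loss(sp, sg, np.random.rand(1),
                           np.random.rand(64, 64), np.random.rand(64, 64)))
```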
[1] Xing et al., Adversarial Domain Adaptation and Pseudo-Labeling for Cross-Modality Microscopy Image Quantification, MICCAI, 2019. The contribution is not clearly explained in the manuscript and there is no clear distinction from the SOTA methods. Because the connection to SOTA is missing, the evaluation of the method's performance is hard to interpret. """,2,1 midl20_59_4,"""CellSegUDA is proposed to perform domain adaptation for cell segmentation. Additional data synthesis or data augmentation is not required. The quantitative and qualitative results are promising. The method could be extended to semi-supervised domain adaptation (SSDA). This model can be applied to other cell modalities. Unsupervised domain adaptation was proposed to make the algorithm scalable. The paper is easy to follow, with a nice method figure. Both quantitative and qualitative results are provided. Adversarial domain adaptation for cell segmentation is a good application. It is not clear why the lambda weights are small (i.e., 0.01). The paper listed comprehensive prior works, but actually did not compare with them except the basic U-Net and DA-ADV. Only very similar domains are evaluated for this method. The results are promising with a good clinical application. The paper is well organized and written, with easy-to-read figures. The ablation tests for the % of images are comprehensive. Ground-truth labels for cell segmentation are modeled as domain-invariant.""",3,1 midl20_60_1,"""Summary: An active contour based object detection strategy is transformed into an unsupervised/self-supervised learning setting for segmentation tasks. This work proposes to parameterise contour evolution with a convolutional neural network and self-supervise the learning with intensity-based statistics without requiring any concrete labels. A strategy to incorporate a few labels to further refine the segmentation is also proposed. Strengths: + Use of the active contour without edges (ACWE) strategy for unsupervised/self-supervised learning is novel. + Further, the use of intensity-level statistics for self-supervision is an interesting contribution. + The possibility of refining segmentations with a few labels is an additional advantage + Results are convincing Weakness: - Perhaps due to the limitation in space, the concept of ACWE is not clearly elucidated. As the work is heavily dependent on the ideas from Chan and Vese, 2001, strengthening this discussion with further motivation is recommended - The experiments are demonstrated on simulated data. How realistic are these images and how would the model fare on real data? - No baseline methods are reported to appreciate the reported performance""",4,0 midl20_60_2,"""The authors presented an unsupervised learning approach for segmenting bones in artificial SPECT images. A recurrent neural network is used to produce a binary segmentation. The model is trained using a loss derived from the Chan and Vese active contours model. As such, it does not require manual segmentations for training. Authors also introduced an additional loss to use when ground truth labels are available, to train the model in a semi-supervised way. The experimental evaluation is performed on a series of simulated SPECT images to segment bones, in a sort of ablation study in which they trained the model in an unsupervised way, fine-tuning the model with ground truth labels and in a semi-supervised way. Results indicate that the best results are obtained using the semi-supervised approach.
Pros: * Modelling an active contour approach using neural networks is definitely a promising line of research, especially for applications in which active contours have proven to be useful (e.g. vessel segmentation in CT scans). Cons: * I am not sure if the proposed approach would be applicable to other problems. In the simulated SPECT images used in the paper it is clear that the background class is definitely black, and that the target class has a mean value higher than that. Then the loss function seems appropriate, because that is the most contrastive statistic between the two classes. In other problems it might be more difficult than that. It would be nice if the authors could at least elaborate on how to extrapolate the method to other more challenging segmentation problems. Perhaps crafting new features might be a solution, as long as the computation of the features is differentiable. * The paper lacks a comparison between the proposed approach and another simple baseline (e.g. region growing or even Otsu thresholding). Since the results of the unsupervised model are not so accurate (Mode 1 in Table 1), it is definitely necessary to analyze them in the context of other unsupervised segmentation methods. * Using means and stds of DSC does not give us a full picture of the distribution of the DSC values. Please replace Table 1 with a box plot. * The paper includes some statements that are not supported by references or experiments, or that are quite hard. In my opinion this is probably due to the lack of a more in-depth revision of the text. I would recommend the authors to double check the following sentences: --> The statement ""several months to a(n) year"" is quite relative. Depending on the target problem, segmenting an image might be much easier to do. --> ""Solely rely on the statistics of intensities in a given image"". Most of the segmentation methods are based only on the intensities in the image! I wouldn't pose this as a disadvantage of the method. It would be different if you mention for instance the fact that the image features have to be manually crafted. Questions: * What is the motivation of using a RNN instead of a classical U-Net? U-Nets are known to require not so many training images, which is relevant in the context of pushing towards an unsupervised segmentation approach. * Using a PReLU activation as the final activation function of the network seems quite odd. Could you please elaborate a little bit more about this decision? Did you try using a sigmoid function? Is it related to the fact that the loss function requires a binary segmentation to compute the intensity statistics? * Is the loss stable during training? I'd like to see the evolution of the training/validation losses per iteration or epoch. Some other minor comments: * Avoid repetitions in the text (e.g. ""methods"" and ""method"" in line 9 page 1). Statements like ""A great deal"" should be avoided as well. * The use of English can be improved, perhaps with the help of a native English speaker.""",2,0 midl20_60_3,"""This paper describes a method to leverage unlabelled images to improve image segmentation via a convolutional neural network. The idea is based on the well-known active contour without edges segmentation method introduced by Chan & Vese, which consists in minimizing an energy such that both the background and the segmented object have homogeneous intensities, and the boundary between them is smooth.
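For readers less familiar with the Chan-Vese energy referred to throughout these reviews, here is a minimal, hedged sketch of an ACWE-style loss evaluated on a soft mask: the region means c1/c2 are computed inside/outside the predicted mask, and a gradient-based term stands in for the length/smoothness penalty. This is a generic textbook formulation under my own choice of weights, not the reviewed paper's exact loss.

```python
import numpy as np

def acwe_loss(image: np.ndarray, mask: np.ndarray, mu: float = 0.1) -> float:
    """image: 2D intensities; mask: soft foreground probabilities in [0, 1]."""
    eps = 1e-8
    c1 = (image * mask).sum() / (mask.sum() + eps)               # mean intensity inside
    c2 = (image * (1 - mask)).sum() / ((1 - mask).sum() + eps)   # mean intensity outside
    region = ((image - c1) ** 2 * mask + (image - c2) ** 2 * (1 - mask)).mean()
    gy, gx = np.gradient(mask)
    length = np.sqrt(gx ** 2 + gy ** 2 + eps).mean()             # smoothness / length proxy
    return float(region + mu * length)

img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0                # bright square on black
good_mask = (img > 0.5).astype(float)
# the correct mask yields a much lower loss than a non-informative one
print(acwe_loss(img, good_mask), acwe_loss(img, np.full_like(img, 0.5)))
```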
The authors train a network with such a loss for all unlabelled images, and a standard segmentation loss for the labelled ones. * The main limitation of this method is that it assumes that the object and the background each have consistent intensities, i.e. that they can be approximated by a single intensity value: c1 (resp. c2). This is a very strong hypothesis that is not discussed at all by the authors. In particular, it rarely holds in medical imaging, where structures are more often distinguishable by their shape, their texture or their surroundings, but not necessarily by a single and absolute intensity value. This is actually why methods like Chan & Vese have fallen out of fashion in our field. Here this might even be more dramatic if c1 and c2 are supposed to represent the reference intensities for all unlabelled images. * The experiments are based only on simulated data, and in particular on one phantom. It is also not clear whether training and validation images are really different. On the method itself: * I find it a bit surprising to define F with both the length and area, only to discard the length on the very next line. While I agree that the two quantities are related, they favor different kinds of shapes. Moreover, length is not that difficult to encode as a loss; for instance, consider a norm of the gradient of the network output. This is all the more surprising given that this has been done in pseudo-formula, see the first term of equation 3. * Why not use the Dice coefficient or the cross-entropy for the labelled images, which are widely considered to be the standard losses for medical image segmentation? Minor issues: * The images are not very readable, especially Figures 2 and 3. * The statement generating annotated data [..] could take several months to a year to complete seems a bit exaggerated. * The paper could benefit from a proof-reading since there are many typos, for instance - base on -> based on - prposed -> proposed - CovNet -> ConvNet - avaiable -> available pseudo-formula -> pseudo-formula """,2,0 midl20_60_4,"""The authors propose a recurrent CNN architecture with a new loss inspired by the Mumford-Shah / Chan-Vese functional. Doing so, the network learns to maximize the separation between foreground and background in a fully unsupervised fashion. With a few modifications, the approach is also adapted to the supervised case where segmentation labels are available. The authors validate the approach on simulated phantom SPECT data. The paper is technically sound and convincing. The idea of bringing the ACWE formalism to deep-learning-based segmentation is refreshing and in itself is a sufficient contribution to the field that is worth being communicated to the community. On the negative side, the validation on simulated data is not very impressive. Visual results seem to suggest that foreground-to-background separation is quite easy for this data, with an almost uniform black background. There are also too many typos for a 3-page paper (avaiable, prposed,..). One could also wonder how well the supervised ACWE loss compares to conventional segmentation losses. """,4,0 midl20_61_1,"""The manuscript presents a network in the form of an ensemble of 3 parallel DenseNet arms focusing on the gross mass (GM) patches, mass background (MB) patches, and overview (OA) patches individually for classifying masses as benign or malignant. The manuscript lacks necessary implementation details.
The design of the network lacks necessary justification. Results show marginal improvements compared with each individual arm. One significant problem is the ambiguity in the determination of GM, MB, and OA, which is a segmentation problem. Overall, the contribution of the manuscript is limited and the quality needs significant improvement. """,1,0 midl20_61_2,"""The authors suggest using three different types of global summary patches to drive the breast mass classification, rather than using the input image directly. This is sensible. Sometimes extracting weak-label-type information or forcibly thresholding some input features (e.g., voxels are forced to 0 for some regions, like what is going on here) may be useful. But the paper needs improvement. What is the patching doing that may not be captured in the first layer of the network? I.e., if patching is averaging and thresholding of voxel values, won't that be capturable in the first few layers of the network (and this also relates to the SPE values of the proposed and individual models, see below)? The presentation can be improved. What is the intuition for specifically using these three patch types? Is the output space voxel-wise? It is not clear what we are looking at here. Do we expect the three modalities to behave differently in terms of lower layers, i.e., do we need different architectures for each independent modality (the left blocks in the figure)? Are the results reported in the table significantly different? Why is the specificity decreasing but accuracy increasing (anything w.r.t. the dataset or imbalanced classes)? """,2,0 midl20_61_3,"""The paper proposes an automatic classification of the masses in digital breast tomosynthesis (DBT) to assist radiologists in accurate diagnosis. An end-to-end multi-scale multi-level feature fusion network (EMMFFN) model is proposed for breast mass classification using DBT. They extract three multi-faceted representations of the breast mass (gross mass, overview, and mass background) from the ROIs and feed them into the EMMFFN model simultaneously. The performance of the model is promising (AUC 85.09%), but the paper lacks details of model parameters and training, as well as comparison results with existing methods. - What best describes the contribution of this paper? Please take the paper type into consideration for the rest of your evaluation. For instance, a strong method paper should not be rejected for limited validation. Similarly, a strong validation paper should not be rejected because of lack of methodological novelty. O methodological development O validation/application paper O both validation/application paper - In 3-5 sentences, describe the key ideas, experiments, and their significance. Multi-modality information from three types of patches is used - gross mass (GM), mass background (MB), and overview (OA) patches on 2D DBT. The performance of each individual modality is compared with a multi-modal model, showcasing the superior performance of the fusion. - What are the strengths of the paper? Clearly explain why these aspects of the paper are valuable. The proposed EMMFFN method, using three improved DenseNet121 models, was applied to their dataset of DBT images to characterize three types of patches of breast mass, and integrates these models at the feature layer to increase the benign/malignant mass classification performance.
The model extracts three types of patches - gross mass (GM), mass background (MB), and overview (OA) patches - from 2D DBT mass slices for fusion into the deep learning pipeline for cancer classification. - What are the weaknesses of the paper? Clearly explain why these aspects of the paper are weak. Please make the comments very concrete based on facts (e.g. list relevant citations if you feel the ideas are not novel) and take the paper type (method or validation paper) into account. Multiple spelling mistakes and missing spaces after commas throughout the paper. The paper is not formatted per the author guidelines of MIDL 2020. Fusion of features from different modalities is quite a common technique. Specific details of the individual DenseNet networks are missing. The authors change the ratio between the kernel size and stride size of the pooling layers so that the pooled feature map can contain different information, but how this is achieved is not well explained. Comparison with other existing approaches is missing. - What would you like the authors to address in their rebuttal? (Focus on points that might change your mind.) Formatting of the paper per the guidelines of MIDL 2020. - List any further comments and suggestions for minor improvements or clarifications in the paper. None - Rating (4: Strong Accept, 3: Weak Accept, 2: Weak Reject, 1: Strong Reject) 2 Weak Reject - Justification for rating The proposed methods and improvements are very valuable but need more experiments and clarifications to be validated. Confidence 5 """,2,0 midl20_61_4,"""The introduction is very long, while little attention is given to the authors' own work, materials, and methods. While there is a large number of patients, I wonder why the authors have not divided their dataset into training and validation sets. It is not clear to me whether the images were annotated by a radiologist or by an automatic algorithm. The authors have not mentioned what the imaging technique is (I am assuming it is MRI). I don't think this paper, in this format, is suitable for publication.""",2,0 midl20_62_1,"""In this manuscript, the authors propose a deep learning (DL)-based pipeline to automate the pathological assessment of FISH images with respect to HER2 gene amplification testing. Their pipeline detects nuclei and classifies fluorescence signals within each nucleus using CNNs. Pros: -The paper is very well written and planned. -The pipeline design is adequate. -The experiments are well designed. -The motivation and future directions are given clearly. Cons: -Using the term ""interpretable"" for the results is not adequate, as the results and the generated report are not really interpretable; ""human readable"" might be more accurate. -It would be nice to discuss further how this can be implemented in clinical practice and how much training is needed for practitioners. """,3,0 midl20_62_2,"""The clinical problem this paper takes on is interesting and relevant for patient care, but the motivation is unclear. Is there a problem with the current method for evaluating HER2 status that deep learning can solve? The methods of this paper are unclear and it would be impossible to replicate this study from this manuscript. The hypothesis of this paper appears to be that deep learning can determine HER2 status, but it is unclear whether this was supported or refuted by these results. Major comments: The number of patients and number of images from each patient must be stated.
The paper refers to training, test, and validation sets, but the number of patients in each set and the method by which they were allocated are unclear. Were the sets the same for each task? Are all the patients from the same hospital? The results section consists mostly of methods. There is no methods section. It is unclear how artifacts or overlapping nuclear parts were excluded. It is unclear how this exclusion affected the results. If this exclusion is manual, it calls into question the claim of a fully automated pipeline. How was the ground truth for these images established? The performance metrics are given for each network individually, but it is unclear what the performance of the pipeline is on the overall task of patient HER2 classification. How does this performance compare to the current gold standard? A main claim of this paper is that the pipeline is interpretable. However, there is no description of the features used by any of the networks nor any biological insight provided by the networks. An interpretable network allows scrutiny of its classification decisions. It is unclear whether that is possible here. Minor comments: The text in Figure 1 is so small that it becomes readable only at 200% size. It seems odd to have so much text in this figure rather than describing the process in the manuscript text and referencing the figure in the text. The magnification and microns-per-pixel of the images should be given, as should the hardware used for digitization. The inclusion of the code via GitHub is good. """,1,0 midl20_62_3,"""The authors describe a new end-to-end image analysis pipeline they developed to interpret histo-pathological images of breast and gastric cancers. Specifically, a deep convolutional neural network (CNN) first automates the analysis of fluorescence in situ hybridization (FISH) images that test the Human Epidermal growth factor Receptor 2 (HER2) oncogene amplification status. The deep learning pipeline mimics the pathological assessment, and localizes plus classifies the fluorescence signals within each nucleus. Then, it classifies the whole image regarding its HER2 amplification status. This short paper gives a good overview of the pipeline and reads well. The methodology seems to be solid and very flexible. The results are also promising, although the proposed pipeline is not compared with another state-of-the-art method. Finally, the source code of the pipeline is freely available. I therefore believe that this work will be of significant interest at MIDL. """,3,0 midl20_62_4,"""The authors present a machine learning, computer vision pipeline for FISH-based HER2 oncogene detection and quantification in histopathology images. The paper is well written and easy to follow. Even if the technical contribution is limited, the main pitch of the paper is application novelty. The authors clearly present this in the paper and do not overclaim technical novelty. The authors only provided performance for individual steps of the pipeline. An end-to-end performance analysis should have been included. The authors could have included a few more details about the algorithm, such as the input sizes of the individual networks, training parameters, etc.""",3,0 midl20_63_1,"""The work was an interesting use of V-Net on a challenging problem. PET-CT is a challenging machine learning task because it involves two large images with different anisotropic resolutions and so requires quite a few decisions and trade-offs.
The paper did a good job of re-using established, validated tools and making the code and data available to allow for reproduction of the results. The validation aspect was generally good but was missing some of the thoroughness expected in medical papers. In particular, the lack of any kind of baseline comparison or lesion-detection-level metrics made it difficult to appreciate the degree of success the method had. The strengths were the clearly described pre-processing and model selection steps. The methods section was sufficiently detailed to recreate their steps independently. The approach of comparing 2D and 3D and different fusion methods was also very thorough compared to similar works. The figures were well done and easy to read and interpret. There were a few weaknesses to the paper. Principally, very little context was given for the model's performance. A DSC of 0.61 could be fantastic or terrible, but without knowing the general range of expert human performance, or that of a simple threshold and/or classical computer vision on PET, it is difficult to assess how well the model performed. Furthermore, from a clinical standpoint, the results are not presented in a physiologically meaningful manner. Did it miss 40% of the lesions? Did it estimate the lesion volume 40% lower than it was? Did it find all tumors but miss all metastases? Without knowing these specifics it would be very difficult to show that such a model would offer any value at all to a clinician. The use of a paired t-test for comparing fusion approaches seemed dubious at best and I would leave it out. The decision to use isotropic sizes despite the data's anisotropic acquisition was potentially not justified and probably hindered 3D performance. The use of publicly available data was good, but means there is little understanding of the errors and problems with the ground-truth labels. The use of multiple physicians to provide rough estimates of inter-reader variability would have massively strengthened this work. The paper was well-written and easy to understand and follow. The steps were well documented and the results would be of some interest to others in the field working with similar 3D and/or multiple-contrast fusion problems. The clinical relevance was unclear and the impact without more robust comparisons is hard to assess. """,3,1 midl20_63_2,"""This work proposes a deep learning approach for automated segmentation of tumors and nodal metastases in head and neck scans. Two approaches are proposed that utilize both CT and PET (early and late fusion) and these are compared to using only CT or PET. Furthermore, 2D and 3D methods are compared. The results demonstrate that the fusion approaches outperform single-modality ones. Surprisingly, the 2D method outperforms the 3D method. The comparison of early fusion vs. late fusion vs. single modality is interesting and provides valuable insight into what features are important for this task. The comparison of 2D vs. 3D is also interesting; however, the results are surprising and more insight into this would be beneficial. The main weakness is the lack of novelty of the proposed method. Both fusion methods have been proposed before. It would be interesting to see a more advanced fusion approach - using input channels for early fusion and average masks for late fusion are quite simple solutions (which is not necessarily a bad thing; it is possible these are the best solutions).
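For reference, the two fusion variants, as I understand them, amount to something like the following (my own rough sketch, not the authors' code):

import torch

def early_fusion_input(ct, pet):
    # Early fusion: stack the two modalities as input channels of a single network.
    return torch.cat([ct, pet], dim=1)                 # (batch, 2, D, H, W)

def late_fusion_mask(prob_ct, prob_pet, threshold=0.5):
    # Late fusion: average the per-modality predicted probability maps, then threshold.
    return ((prob_ct + prob_pet) / 2) > threshold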
The evaluation and comparison of different methods (including different fusion techniques vs. single modality and 2D vs. 3D) is thorough and interesting. However, the novelty of the proposed method is very limited. Therefore, I recommend this work be presented as a poster.""",3,1 midl20_63_3,"""The authors investigate a way to automatically segment oropharynx tumors in the head and neck region to better identify the patients with a worse prognosis. On 202 cases with oropharynx tumors from four centers, a V-Net for segmentation was trained in 2D and 3D. They compare the segmentation results of CT, PET and CT + PET. To prove that the combination of CT and PET-CT scans can improve patient care for patients with head and neck tumors, a large validation study is needed. For this clinical validation, a large number of cases with delineated tumors are needed, which is a very time-consuming process when done manually. The study therefore looked for a way to automatically delineate tumor outlines with data from four centers. The paper is nicely written, although there are some details that need to be addressed: - For the dataset, the authors explain the cross-validation split. However, they don't mention the split between training & validation. - Could the authors explain how well the manual annotations overlap when a separate CT scan for treatment planning is used? - The authors give averaged DSC for all centers. Could the authors also provide non-averaged DSC to see whether the results are equal for all centers or there is one outlier? - In Figure 3 there is no visual example of the 'late fusion' method. - The authors mention another paper focusing on head and neck tumors; if comparable, could they provide some comparison to their method and scores? - In the discussion, it would be nice to see how the authors think the DSC could be improved. The authors try to tackle a problem that hasn't been explored a lot with deep learning. They use an existing model with an open-source dataset to train and validate their results. The obtained scores leave some room for improvement, but the study is well set up. """,3,1 midl20_63_4,"""The authors presented a method for segmentation of head and neck tumors and nodal metastases from PET-CT images. The basic idea is to use 2D and 3D V-Nets; experiments were done on 202 patients' scans, and the authors show there is an increase if the two modalities are used together when segmenting. Interestingly, the authors' 2D approach was slightly better than 3D. For radiomics, prediction of tumor growth, and several other clinical imaging perspectives, the co-segmentation (or maybe better, ""joint segmentation"") problem is important. The authors used 2D and 3D neural networks, with some fusion strategies to improve segmentations. One of the strengths of the paper is the large number of patients evaluated. Using both 2D and 3D networks comparatively is also an application-wise incremental addition to the paper. The paper describes itself as a well-validated application; therefore, I will only briefly mention here that the techniques the authors are using are not new. --- Having ground truths only from CT does not seem a fully feasible approach; it forces the system to learn tumor regions only when boundaries from PET and CT are very close to each other. --- DSC is good, but not enough for a complete comparison of segmentation (and evaluation). A shape-mismatch-based metric is necessary too (or completely switch to FP and TP volume fractions).
--- The literature review is not complete; several more recent co-segmentation (and joint segmentation) works are not cited, or the authors are not aware of such works. For the completeness of the article, and to be fair, those works, or at least the state-of-the-art methodologies, should be mentioned and compared (for instance, C. Lian et al., IEEE TIP 2018; Bagci et al., MedIA 2013; Guo et al., IEEE TRPMS 2019; etc.). The paper is in the category of well-validated application, but this ""well validated"" component is missing in this paper. Some of the primary reasons are the following (summary): -- unjustified claims about fusion strategies -- the preparation of the ground-truth labeling does not correlate with the results, or the authors fail to explain why -- it is a cross-validation study; the generalization ability is not known, and no independent set is used -- comparison with state-of-the-art deep nets (particularly on this topic, not just a basic U-Net or V-Net) and with pre-deep-learning papers is missing -- evaluations are based on DSC scores only; there is a need for shape scores as well (and the full version of DSC). """,2,1 midl20_64_1,"""This paper proposes a supervised deep representation learning system combining metric learning losses and attention mechanisms. The proposed architecture builds on the recently proposed metric learning approach divide and conquer proposed by Sanakoyeu et al. The general idea is to first learn a general embedding based on a metric learning loss such as the contrastive loss, then perform unsupervised classification (k-means) in the global embedding space to separate the data into subgroups that are further directed to specific embedding models. The main contribution of this paper is to add an attention mechanism to this architecture and evaluate it on a skin lesion dataset from the ISIC 2019 challenge. The authors compare their method with standard embedding methods based on three metric learning losses, namely the contrastive, triplet and margin losses, as well as to the original divide and conquer algorithm without any attention mechanism. They demonstrate that their method compares favourably to all other methods. The paper is well written, and the state of the art is clear and recent. The main novelty of this paper is to evaluate this newly proposed representation learning approach as well as improve it by adding the attention mechanism, which is shown to improve performance. The description of the method and experiments lacks details. I have comments regarding the training and testing phases of the whole pipeline depicted in Figure 1. I suggest the authors address these questions to improve the soundness of the proposed methodological contribution. The authors propose an original methodological contribution as well as well-conducted evaluation experiments on the ISIC 2019 skin lesion dataset. I would rate this paper with 'strong accept' if the description of the methodological contribution were clearer""",3,1 midl20_64_2,"""In this paper, the authors propose the addition of an attention-based metric learning approach for medical images with the goal of introducing visual interpretability for medical learning. They use the DivConq approach as in Sanakoyeu et al. that splits the learned embedding space and the data into multiple groups, thereby learning independent sets of metric distances over different subspaces. Here the authors extend the DivConq deep metric learning approach for medical imaging. They use the ResNet-50 architecture for their network.
The authors provide quantitative evaluation over a public benchmark dataset of skin lesions as well as compare it to DivConq among other methods, and show good results. The paper is written clearly. The experiments are described in detail and are sufficiently evaluated. The attention model is not described well and is not motivated properly for this problem. Is S(x_i) a composition operation? While incorporating what the authors call the ""attention model"" is a good idea, this dataset is not the best suited for demonstrating this idea. The authors should choose a more challenging dataset that has multiple lesions or a heterogeneity of tumors. Thus, if the attention maps are able to successfully capture relevant information in those datasets, that will test the strength of the model. The paper presents an application of the deep attentional model for image clustering and image retrieval. While many of the ideas have been proposed before, the application to the skin lesion dataset is novel and thus justifies a discussion. """,3,1 midl20_64_3,"""The paper presents a novel algorithm by adding attention to the metric learning scenario for medical image analysis. The algorithmic discussion is provided in detail and the results sufficiently showcase the efficacy of the proposed method over other similar methods in this sub-field. The paper overall makes a good contribution to the field. - The proposed algorithm showcases that, similar to other scenarios/problems, adding attention to state-of-the-art methods (in this case for metric learning) improves the performance overall. - The results in this regard are good and the method seems to perform better than existing similar methods. - The proposed algorithm seems to have an inherent advantage of not requiring additional processing during test time. - The experimental details are explained clearly, contain all the required information, and are easy to follow. - The results are shown only for one dataset. It would have been good to see how the proposed method performs on at least two publicly available datasets to improve the reader's confidence that the method works in different scenarios. The paper makes a novel contribution to the sub-field of metric learning for medical image analysis. The results are quite good and improve over the current state of the art for the dataset considered. """,3,1 midl20_64_4,"""The authors have added an attention module to the divide and conquer metric learning method, which was published in CVPR 2019. The claim is that adding attention can help both metric learning and interpretability. The modified method was applied to skin lesion image retrieval, with performance comparison against other metric learning methods. 1. The idea of adding interpretability to metric learning in medical image analysis is intriguing. 2. The modified method was implemented on the ISIC data. 3. Empirical results show the effectiveness of the modified method. 1. It appears to me that the only difference of the proposed method from the CVPR 2019 reference is adding attention modules. Hence, the presented work is incremental with limited novelty. 2. The description of some important experimental setups is vague. For example, after ""combining"" subspaces, how does that go back to the full embedding space to improve K-means clustering? Or is the full embedding independent of the subspace embedding?
After embedding, when evaluating NMI and recall based on K-means clustering as well as image retrieval, was the full embedding space used without referring back to the subspace learners? If that is the case, why were the attention maps of the subspace learners checked instead of those of the full embedding space? 3. Simply based on the NMI and recall evaluation, it does not seem that adding attention improves much over the original divide and conquer implementation. In particular, the authors should provide the standard deviation values from the 5 runs. 4. If interpretability is one of the goals of adding attention, the qualitative analysis of the attention maps should be better discussed. It is not clear to me how the visualized ones show that the attention maps ""learned attentions to variations in"" size, scale, artifacts, etc. 5. It may not be appropriate to simply check the clustering results with K set to 8, which is actually the number of image categories for the ISIC dataset the authors used. The proposed method has limited novelty. It is not clear that the selected attention mechanism is the best choice in the literature. The presentation is not clear enough. The empirical results are not convincing enough. """,2,1 midl20_65_1,"""There are a lot of grammatically incorrect sentences and typos, so it was hard to follow. There is no conclusion or discussion. Readers may not get the point of this paper without a conclusion. Overall, the paper is poorly organized. Figures 1 and 3 are not referenced in the manuscript. Please use distinct colors for the different anatomy.""",1,0 midl20_65_2,"""Summary The authors propose to use a V-Net for segmentation of coronary artery calcification (CAC) voxels in 3D chest CT images. Strengths - Calcium scoring is a clinically relevant task. Weaknesses - There have been many calcium scoring papers using deep learning; it is unclear what the proposed method adds to those. The quantitative evaluation is very different from common evaluation approaches in this field and it is thus difficult to compare the obtained results to other methods. This could be addressed by evaluating on a public benchmark such as the orCaScore challenge (pseudo-url, pseudo-url). - The authors evaluated their method on a test set of 14 patients; this is very small compared to other papers that have test sets containing hundreds of images (pseudo-url, pseudo-url, see pseudo-url for an overview). - The authors should carefully revise the related works section; many statements about related papers are incorrect. E.g., Lessmann et al. did not combine contrast and non-contrast scans. Santini et al. and Huo et al. did not estimate calcium scores directly but performed segmentation. On the other hand, De Vos et al. (pseudo-url, not cited) did. The authors write that no previous methods have localized deposits within branches, but actually this is quite common. See e.g. the participating methods in pseudo-url. - To address the problem of class imbalance between calcified voxels and background voxels, the authors dilate all lesions in the reference standard. However, this is likely to lead to oversegmentation. - The paper is not well prepared; all image captions seem to be the same. Detailed comments - Quite some typos and grammatical errors, please check carefully. E.g. coronaires, it is therefore become, Previous the deep learning era, simmetry, etc. - The statement For such modality the most likely intensity of calcium is 130 HU on the Hounsfield scale is incorrect; this is only a threshold. Density values can be much higher. - Fig.
1 is not particularly useful for this paper as it does not show any coronary calcifications. - What do the authors mean by ""voxelometry""? """,1,0 midl20_65_3,"""This paper describes a method for automatic detection and labeling of coronary calcification in CT. The authors use a V-Net with anisotropic down- and up-sampling to account for the lower resolution along the z axis that is typical for calcium scoring CT scans. This short paper claims to present a well-validated application of deep learning in medical imaging, but unfortunately does not live up to this claim. The method description is clear, but the data and annotation protocol are unclear, the test set is small (only 14 subjects, even though the amount of coronary calcification per patient is often rather small) and the paper contains many mistakes (the related work section confuses references, e.g., Yang et al. used non-contrast and contrast-enhanced CT scans, not Lessmann et al., who on the other hand predicted the location of the calcification, which the authors claim has not been done before; the captions of Figures 2-4 do not describe the figures, Figures 3 and 4 even have the exact same caption; the results section suddenly mentions numbers for ""aorta"", which has not been mentioned before; etc.).""",1,0 midl20_65_4,"""The MIDL 2020 author instructions (pseudo-url) clearly state that ""Short papers are up to 3 pages (excluding references and acknowledgements)"". This requirement is not met by the submission. The paper was submitted as a ""well-validated application"", which is questionable given the empirical validation. The Appendix looks unmotivated and unrelated to the text. The plots in Figure 2 contain JPEG artifacts and fail to communicate how well the prediction performs. Here, a Bland-Altman or similar plot would be better suited.""",1,0 midl20_66_1,"""The paper investigates tensor networks for medical image classification, specifically the use of Matrix Product State (MPS) blocks as an alternative to standard convolutional architectures. In this context, MPS blocks embed input multichannel patches into a vector representation, and so on for every layer until the final output is plugged into a classification (softmax) layer. The use of tensor networks is, to my knowledge, not mainstream in the community. The main technical contribution of the paper seems to be to adapt the framework to 2D images, where capturing the local and global structure is important. The approach is illustrated on two datasets / classification tasks, PCam (presence of tumour tissue) and LIDC (presence of lesion). The proposed LoTeNet is compared to a 1-layer MPS architecture, and to a DenseNet architecture. - The framework investigated here is not mainstream - The paper is overall well-written and well-structured, although it is hard to follow at times (around Eq. 5, the wording is confusing when introducing pseudo-formula and pseudo-formula; if I understood correctly, the word ""dimension"" is used for several clashing purposes here.) - Overall it is intriguing as an alternative approach to pattern detection / non-linear embedding and I could see other applications for some of the core ideas. Validation is a bit limited. The main advantages of the approach can also be its main drawbacks. For instance, the reduction in GPU memory footprint seems to be due to directly embedding patches, rather than computing full feature maps and then pooling.
This also means that the approach will be more difficult to extend outside of classification, or to more complex classification architectures (not purely feedforward). Also, wouldn't convolutional architectures computed with a stride equal to the kernel size benefit from the same improved footprint? Regarding accuracy, it would be useful to clarify the impact of the number of parameters compared to benchmark architectures: ""the number of parameters in LoTeNet is higher (1M when compared to 120,000 for the other two models)"". The paper is, on the whole, well structured; the approach has some originality and the work is well-suited for MIDL. On the other hand, the validation is a bit limited, so that it is difficult to truly judge the actual usefulness/benefit of using tensor networks for medical image classification. What is missing most is some insight into how the MPS block works (not necessarily the 2D adaptation, which is well illustrated, but rather what Eqs. 1, 2 and 5 concretely result in) compared to convolutions or other embeddings; the reader has to invest a bit of time figuring it out for themselves.""",3,1 midl20_66_2,"""A tensor network which models a Matrix Product State (MPS) is presented. This is an efficient approximation of a naive tensor representation which has been used in related fields. The performance is tested on a couple of public datasets: binary classification of metastasis and CT lesion detection. The presented method showed competitive AUROC results with significantly smaller model sizes. 1. Despite the technicality, it is overall nicely written. 2. The reduction of model parameters is a significant gain and has potential practical benefits for large images. 3. The ablation studies of the parameters (e.g., bond dimension) are helpful. 1. The squeeze operation is not particularly novel or interesting. Real-NVP was using it for the purpose of splitting the input feature into two partitions (by construction of the model), so I am not sure if the intentions are the same. 2. AUROC is the only metric. 3. Please see the ""Questions To Address In The Rebuttal"" section. The overall paper is interesting, although the technical novelty resembles several related works in the field that I have mentioned. There are several questions I would like to hear back from the authors on, and I am willing to raise my score based on the response.""",3,1 midl20_66_3,"""Model parameter reduction is a crucial need in processing medical images. Tensor networks have achieved great success in reducing parameters in other machine learning tasks. This paper applies the tensor network method to medical image classification tasks. Different from [1], this paper adopts the local orderlessness concept and designs a multi-layer structure. This adaptation leads to fewer parameters and higher performance on the image classification tasks of PCam and LIDC. [1] Efthymiou S., Hidary J., Leichenauer S. TensorNetwork for Machine Learning. arXiv preprint arXiv:1906.06329, 2019. The motivation of applying tensor networks for model parameter reduction in processing medical images is good. Experimental results have demonstrated good performance. Overall, this paper is well-organized and well-written. 1. A detailed discussion on the connection to [1] is needed. 2. A table comparing space complexity could be provided. 3. How to reshape the output vectors of a given layer back into an image could be explained more, since the output of (Eq. 4) does not satisfy the quantum wave property of (Eq. 2) directly. 4.
This paper provides only the time cost of LoTeNet. The time cost of the other models could also be provided. The idea of applying tensor networks for reducing parameters is a good attempt for medical images. The results look good; however, a complexity analysis and a running-time comparison are important to demonstrate the advantages of this paper.""",3,1 midl20_66_4,"""The paper proposes an interesting idea to apply Tensor Networks (Stoudenmire-Schwab, NeurIPS 2016) to medical image data. The paper reads well and gives a nice explanation of tensor networks and how to apply them to image data. The goal of using tensor networks is to learn feature embeddings of high-dimensional data with a small(er) number of parameters; the latter is achieved via tensor decompositions. Results show that good performance can be achieved with far fewer parameters than the state of the art. However, reading the paper raises several questions regarding the soundness of the approach. The paper presents a new idea for the analysis of (high-)dimensional medical image data via tensor networks. Parameterizing neural networks via tensor networks may be a good way to reduce the parameter search space and arrive at efficient architectures. The premise of the paper is that an image with N pixels and d channels is represented by a vector of dimension d^N, which would mean it consists of 2^10000 scalar values for an image of size 100x100 with 2 channels; this should be N*d instead. The rest of the paper builds on this analysis, which I believe is not correct, and this casts serious doubts on the usefulness/soundness of the approach. What I furthermore feel is missing is a discussion of how the method relates to CNNs. The method (via tensor nets) in itself is not equivariant (an important property in medical image analysis), but the authors design the architecture in such a way (via patch-wise processing) that it keeps the structure of the images mostly intact. In essence, the method seems to describe a form of strided convolutions with convolution kernels parameterized via a tensor decomposition (see detailed comments). The motivation for the paper is to reduce computational resources and parameters, but the experiments are not set up to draw strong conclusions with respect to this. Moreover, the patches are only of size 4x4, which isn't very large at all. Although the experiments show that good performance can indeed be achieved with the proposed framework with a small number of parameters, it is unclear if this couldn't also be achieved with regular CNNs with fewer parameters, especially considering the similarities of the proposed work to regular CNNs (see discussion below w.r.t. strided convs). So, a good control/baseline is missing. I recommend reject based on the concerns that I express in this review. I hope I am mistaken in my analysis and overlooked something, but with my current understanding I cannot accept the paper as I think there are some major flaws in the paper (mainly on the dimensionality of W and the need for tensor networks). I would encourage the authors to use the rebuttal period to alleviate my concerns, if possible, and update the paper accordingly.""",1,1 midl20_67_1,"""It is a well-written short submission. It offers a single-subject analysis of rare cases for diagnostic purposes, in contrast with group analysis. The authors present promising results, but the significance at the end is a bit overstated.
The framework offers to learn normative microstructural features via an autoencoder from a TD cohort and uses unsupervised anomaly detection on images of children with CNV (a rare disease). The novelty is on the low side. The image processing of the diffusion images is done using TractSeg. Twenty white matter bundles of interest are reconstructed and Tractometry is run along these, using 20 control points. The performance is compared to classic z-score and PCA analysis. The proposed autoencoder, the novelty of the proposed work, manages to identify 2 outlier subjects, while the other methods could not. Minor ====== Both Z-score and Mahalanobis distance thresholds cannot ... --> Neither Z-score nor Mahalanobis distance thresholds can.. """,3,0 midl20_67_2,"""The authors propose a tractometry-based approach for anomaly detection. The method is based on an autoencoder. The authors use a pretty unique dataset to test their approach. The article is well written. It is organized logically and it is easy to follow. It is difficult to understand why the authors picked such a rare disease case. It is very much contradictory to the authors' claim that there is an urgent ""need for a paradigm shift for individual diagnosis"" to then address a rare disease case as preliminary results. Do the authors know what kind of changes are expected in these patients' brains? What is the rationale behind the success of the proposed approach? Were there expected differences in certain pathways? How can one be convinced that there is an actual difference? The authors might argue that ""that is the point of the study!"". However, it is not! These are preliminary results and it is more important to show that this idea can work. Therefore, it is essential to show a case where there is some underlying hypothesis for the disorder, such as depression, so readers would know that ""aha, they can detect changes in suspicious areas which are hard to find otherwise"". It would also benefit this work if the authors studied data from more conventional protocols and devices, even though I would expect that the results would be similar. This, however, would strengthen the impact of the work by showing that it can be applied by a wider range of clinics or research groups.""",3,0 midl20_67_3,"""Summary: the paper presents an interesting framework for single-subject analysis. Patients should be seen as 'anomalies' with respect to a normative model. The method is tested on white matter tract profiles with interesting and convincing results (even if the number of observations, especially for patients, is rather small). Remarks: 1- The authors should better explain what the 20 features per tract represent. Is it as in Cousineau et al. (2017) or as in Chamberland et al. (2019)? Do they represent average FA in the tract profile? As the authors have mentioned in the conclusions, it would also be interesting to inspect the robustness of the results with respect to this hyper-parameter (number of features per tract). This should indeed be considered as future work. 2- How are the anomaly thresholds chosen in Fig. 1? Please clarify. 3- In Fig. 3, the authors show the R0 profile of a CNV patient which highlights discrepancies in the association tracts. What about the other patients and controls? Is this R0 profile an actual outlier with respect to the profiles of the controls? Please clarify.""",4,0 midl20_67_4,"""Summary The manuscript proposes a method (Autoencoder) for anomaly detection based on Tractometry features.
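As a rough sketch of this kind of reconstruction-error anomaly scoring (my own illustration with hypothetical feature sizes, e.g. 20 bundles x 20 control points, not the authors' implementation):

import torch
import torch.nn as nn

class TractAutoencoder(nn.Module):
    # Illustrative only: a small autoencoder over a flattened tract-profile feature vector.
    def __init__(self, n_features=400, latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, n_features))

    def forward(self, x):
        return self.dec(self.enc(x))

def anomaly_score(model, x):
    # Subjects whose features the normative model cannot reconstruct well get a high score;
    # a threshold on this score (e.g. a percentile of the training scores) flags outliers.
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)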
Quality The preprocessing of the dMRI data as well as the extraction of the Tractometry features are methodologically sound. Using the reconstruction loss of an autoencoder to detect anomalies is also a common approach. Moreover, the authors used two other methods (Z-score and PCA) as baselines for their proposed method (autoencoder). Comparing to more sophisticated models like variational autoencoders would have been nice. My main concern is the following: the test set is extremely small (n=3 out-of-distribution samples). The authors argue that their method is suitable for single-subject analysis; however, for a proper evaluation it would have been necessary to have a larger test set. Of the 3 out-of-distribution samples, two were correctly detected as out of distribution. This raises the question of whether the results are significant or only noise. A larger test set is needed to answer this question. Ideally the authors would also evaluate their proposed method on a second dataset to show that their method generalizes to different datasets (the authors stated this as future work). Clarity Not all methodological details are clear, e.g. it is not clear how the PCA+Mahalanobis distance was applied. This at least should be a bit clearer. Originality and Significance Using autoencoders for anomaly detection is very well established. But doing this kind of anomaly detection in the field of diffusion MRI and Tractometry is new and certainly valuable for the field. However, a more diffusion-MRI-focused conference like ISMRM could be a better fit than MIDL, which is very deep-learning-focused. Summary This paper could be valuable for the diffusion MRI and Tractometry community if the evaluation were sound. However, using only n=3 out-of-distribution test samples, the evaluation is not really meaningful and cannot answer the question of whether the proposed method really works. """,1,0 midl20_68_1,"""The paper is well written and interesting to read. The experimental setup is made very clear and the results are nicely portrayed, though the motivation for wbMRI synthesis does not fully come across from the very beginning. I really appreciate the combination of PGAN/StyleGAN and Schlegl's anomaly detection approach. I also really like the study of simulated tumor intensity and radius and their impact on anomaly detection, although these preliminary results are not earth-shattering. Pros: - well written and clearly motivated Cons: - Given the rare nature of cancer in pediatrics, can you comment on the clinical relevance of this field of study? Do you see potential elsewhere? - Cancer regions are only simulated; a set of real cancer testing data would have been nice (is such data available? Again, I think this relates to the clinical relevance) - Anomaly detection: which model did you exactly use for anomaly detection? - DCGAN: I really wonder how you were able to obtain such compelling results using DCGAN. It would be great if you could provide details on the training. - Anomaly detection: Do you also have visual results for a less hyper-intense, simulated tumor? - Anomaly detection: Is the accuracy evaluated on a pixel level, or at the level of connected components? - Where exactly did the StyleGAN2 fail, as the radiologist was still able to correctly identify 70% of the generated images as fake?
Minor: - Introduction: How the synthesis of such wbMRI would play together with anomaly detection is not completely obvious from the very beginning; I'd suggest some rephrasing of the introduction.""",4,0 midl20_68_2,"""The paper evaluates pediatric whole-body MRI generation using 4 pre-existing GAN models. The evaluation consists of qualitative visualization, as well as FID, DFD and the radiologist discriminative rate for real/fake images. They also conduct a synthetic anomaly detection task (i.e. imputing the healthy MRI with an artificial anomaly) and show that the model is able to identify the inserted artifact. I have some major concerns which I will list: 1) The research question behind this work is not clear to me. In the introduction, the authors state the motivation for this work is to develop a cancer screening tool. However, the experiments do not reflect that. If the research question is how GANs can be used for cancer screening in wbMRI, then they should have evaluated the model on a real anomaly detection task, not a synthetic one. 2) In the synthetic anomaly detection task, it is not clear if the query image is from the training data or from held-out test data. Based on what I infer from the paper, it seems the authors used all 90 subjects for training, which implies the query image was from the training data. This impairs the validity of the experiment since the generator could have overfitted to the training data. I would like to see a held-out dataset consisting of multiple subjects. 3) There is no qualitative or quantitative measure to show if the generative model has overfitted to the training data. One way to show this is to show, for every generated image, the closest neighbor in the training data. While this is not a quantitative measure, it is a qualitative one. Also, what metric to use to find the nearest neighbor could be tricky; you could use the same metric you used in the anomaly detection task. I think the paper needs more experiments to validate the approach and so I would reject it in the current state. """,2,0 midl20_68_3,"""This paper compares several GAN methods in terms of generating paediatric wbMRI images. They compared different GANs with two metrics commonly used in computer vision and a real vs. fake human test. They also used the generated images for a cancer detection task, with a comparison against a classical method. Pros: - Compares several GAN methods including some very recent ones. - Uses metrics to evaluate results, including a human test. - Showed convincing qualitative results. Cons: - FID and DFD may not be suitable to evaluate medical images. One key point of medical image synthesis is that synthesised images do not only need to be varied and realistic, but also need to be clinically meaningful. - It is not clear what the input of the GANs is. Is it a random vector/scalar? Or is it a real medical image? If you want to perform detection by comparing generated images with real images, would it be better to use real medical images as input, instead of finding the closest neighbour? - What are the data you used? Are they publicly available? - Is the classical method you compared with state-of-the-art, or how far is it from the state of the art? It seems that its results are quite poor.""",3,0 midl20_69_1,"""Key ideas: In this work the authors present a very interesting application of NLP to the classification of neuroradiology reports.
Their contribution is to modify and re-tune the state-of-the-art BioBERT language model for the task of classifying radiological descriptions of historical MRI head scans into normal and abnormal, as well as several subcategories. The classification performance of the proposed model, Automated Labelling using an Attention model for Radiology reports (ALARM), is only marginally inferior to an experienced neuroradiologist for normal/abnormal classification. experiments: The experiments are impressive. The dataset is large, comprising 3000 radiology reports (randomly selected out of 126,556) produced by expert neuroradiologists, drawn from all adult (>18 years old) MRI head examinations performed between 2008 and 2019, with 5-10 sentences of image interpretation. The 3000 reports were labelled by a team of neuroradiologists to generate reference standard labels. 2000 reports were independently labelled by two neuroradiologists for the presence or absence of any abnormality. On this coarse dataset the performance is excellent. A further sub-classification into different disease groups is made, and here the performance is also good. significance: The clinical problem that the paper addresses is very important, and many research units would have a direct use for this tool in order to extract clinical data for training or research purposes that is either normal or abnormal, or in given subcategories. - the importance of the application - the way the authors modify and re-tune the state-of-the-art BioBERT language model - the size of the dataset that was compiled - the performance on the normal vs. abnormal task as well as the subcategory task - The use of a comparison of an experienced neurologist and stroke physician vs. a neuroradiologist is somewhat strange to me; clinically speaking, either I have access to radiologists/neuroradiologists that can describe my scan or I do not. In the hospital setting, even in research, one wouldn't give scans to neurologists to describe them. - One caveat in the experiments, though, is that the results are given for a single run on the test set. This is common in deep learning applications, but makes it slightly hard to guess how this would perform if trained slightly differently. The paper addresses a very important clinical issue and employs experiments with a considerable-size dataset of radiological reports of brain MRI scans that show a performance that is on par with a neuroradiologist.""",4,1 midl20_69_2,"""The paper proposes a method to automatically classify free-text radiology reports. The algorithm is built on top of a pretrained BioBERT model that converts text terms (""tokens"") to high-dimensional representations. As a novelty in this work, an attention module is used to compute a weighted average of the high-dimensional representations of all tokens in the report. This average representation is passed to a 3-layer fully connected network to predict a label. The entire network is trained end-to-end. Several experiments are done on a dataset of 3000 labelled reports. The model is compared to a simplified version (with fixed pre-trained weights of the BioBERT model), to an existing approach, word2vec, and to humans. The obtained prediction accuracies are impressive. - Clearly written manuscript. - Relevant and interesting application. - Well-designed and carefully performed evaluation experiments. - Results are good.
- The experiments did not evaluate the effect of adding the custom attention module on the performance. They only report in the Method section (3.2) that it led to improved performance, but no results are shown to confirm this. - The class sizes for the labelled data are not reported. This paper presents an interesting and original application, and shows very promising results on a large dataset. The method seems well-designed, and has some incremental novelty. This is a good application paper.""",4,1 midl20_69_3,"""This work presents a method for automated labeling of radiology report. The method used a standard pretrained classifier (BioBERT) which is extended by transformer-based model and a custom attention function. The task is split into two subtasks: binary and granular classification. Method's restuls have significantly improved over reference methods and experts. This work is very important, for training automated medical image classifiers in the future without the need for manually labeling large datasets. The dataset for this work is very useful and authors put a lot of effort into labelling these reports. The validation of the work is well done, results are convincing. Paper is well-written and structured. Examples of results are a valuable addition. Although this work is not using imaging data, this is a future application of the method. It is not completely clear how the granular classification tasks are defined. For example Fazekas is a score system, is the classifier Fazekas normal yes/no, or predicting the exact score? No further weaknesses. Although this work is not using imaging data, is has a very strong connection to it and therefore I find this work highly relevant for MIDL. The validation of the work is well done, results are convincing.""",4,1 midl20_69_4,"""The authors propose to and show how to turn a state-of-the-art NLP model, BioBERT, into a tool to solve a basic, but relevant text classification task for free-text radiology reports written (dictated) for head MRI exams. To this end, from a large collection of reports, a total of 3.000 reports were expert-curated into 2 classes for 2/3 of the cases, and into a multi-hot vector for five classes for the remaining 1.000 cases. The results show an improvement over previous research, competitive with or outperforming a trained human observer on a limited set of test cases. The authors use BioBERT, a NLP tool based on BERT, to turn a report into a richer representation that can be run through a word-level attention mechanism to inspect the words BioBERT attended to most. The result of this attention module is then run through a dense NN for classification, which was also trained by the authors. The paper shows that limited efforts and modest hardware is sufficient to yield a valuable text classifier that even allows a certain level of interpretability, and that can be used to quickly crawl large report databases. The authors justify why they deviate from the proposed way of fine-tuning BERT-based models for classification, by reporting improved performance if not the result on the [CLS] token alone is used for classification, but the full embedded report representation, run through another self-trained attention module. This is a thought that seems to be justified by the success, even though no theoretical explanation or numerical validation is given. The paper is clearly structured, consistent in the writing, and sound in its methodology. It presents a useful development, and convincing results. 
The evaluation against the closest existing tools (though they are not based on a comparable technology) is augmented by a comparison with a trained human observer, which is not often explicitly done in this research. The promised release of a text data analysis tool based on tSNE adds to the practical usefulness of the work. The most significant lack, for me, is a clear description of how * the ground truth was established; * the human observer performance seen in the comparisons was assessed against this GT. I would have assumed that there is no performance difference between the ground truth (established by trained human observers, after all) and the rating of another long-trained (as the authors point out) human observer. Because there _is_ a strong difference, there must be a reason why, which I would really like to know. The further comments may serve as suggestions for further work. Perhaps it might even be possible to include an implementation of the first point before final submission, if the authors agree. The methodological contribution (word-level attention visualization) is not very strong, as BERT by itself is built to facilitate this type of introspection, and it has been used in many subsequent works. Also, the attention module's output is input to the 3-layer classifier network that does the actual ""judgement"", but is not explained or utilized in the explanation (or e.g. used to derive decision uncertainty, which would be a very simple addition). Also, in my opinion, the explanations shown in the two false negative/false positive cases in particular show the major difficulty with some types of explainability mechanisms like the one presented: they might help to elucidate WHY a DNN was wrong, once you know it was, but they do not help to assess IF it was wrong. To achieve this, one way might be to assign not only attention to words, but also a certainty metric, so that the network can be trained to be less certain when it is wrong (compare e.g. Mukhoti/Gal 2018). A slight lack of justification for the particular setup with a new attention module and subsequent classifier network and the unexplained strong difference between human observer and ground truth make me hope that a ""weak accept"" encourages the authors to improve the submission. In this case, and if the model and training setup as well as the data annotation tool will indeed be released, I can imagine that the interest of the community might be high enough to warrant an upgrade to an oral presentation.""",4,1 midl20_70_1,"""The paper proposes a methodology to optimize the 3D k-space trajectory for MRI reconstruction from data sampled below the Nyquist limit. Using soft physical constraints, the sampling trajectory is optimized so as to allow faithful reconstruction in an end-to-end model. The experiments are conducted using brain scans from the Human Connectome Project. * MRI reconstruction from undersampled data is a highly relevant problem. * The building blocks used in the paper (also the data) are well-understood and well-established. * The training is done end-to-end. * The experiments are purely simulation-based; it is not clear how the trajectories perform in practice or even whether they can be implemented at all. * The simulation experiments are performed on small isotropic images of size 80^3 voxels; it remains unclear whether this setting is representative for larger resolutions. * Baseline experiments are missing, e.g.
a) CS + learned trajectories to verify whether the trajectories are specific to the NN reconstruction b) randomly perturbed trajectories + reconstruction to verify whether the performance of the learned trajectories is any better c) other NN reconstruction algorithms * Machine constraints on the first and second derivative of the k-space trajectory are only imposed in a soft way, i.e. (1) via the loss function and (2) via a coarsened trajectory with a smoothing spline on top. It is unclear whether the constraints are met in practice and which of the two mechanisms guarantees that they are met. * Ablation experiments are missing. * The code of the methodology is not shared with the community and remains closed source, which makes reproducibility very challenging; the data is public domain, though. * It remains unclear whether the obtained images are diagnostically useful, e.g. via an analysis of the highest-error cases. * The optimisation is highly dependent on the initial trajectory. The empirical evaluation is not sufficiently complete to judge the possible merits of the proposed machinery. In particular, the comparison to already existing methods and proper baselines needs to be improved.""",2,1 midl20_70_2,"""Traditional MRI acquisition methods traverse k-space (Fourier space) in a serial Cartesian manner with trajectories in k-space being straight lines. Special sequences use radial lines, or spiral lines. This work relaxes this notion and permits any trajectory, with the constraint that it should be physically possible (as defined by gradient specifications on the MRI scanner). The optimization is done using learning, and the method finds both the optimal trajectory, and the corresponding reconstruction method. This is an exciting idea worth exploring further. The work connects nicely to previous literature on compressed sensing and recent literature on approaches using deep supervised learning. The proposed work includes physical constraints such as the maximum slew rate of magnetic gradients and upper bounds on the peak currents. This is novel and important. Including the physical limitations of the MRI scanner in the optimization is necessary to ensure clinical relevance. This is only a minor weakness, and more of a question: The paper demonstrates, as a proof-of-concept, that the learning-based design of feasible non-Cartesian 3D trajectories in MR imaging leads to better image reconstruction when compared to the off-the-shelf trajectories. This is all true, but the results seem to be confined to wiggly radial lines. It is not clear if this class of trajectories is enforced by the method, or truly is the way to go for optimal trajectories. The quest for optimal k-space sampling is still an active line of research. If learning can help us with the intuition of which k-space trajectories work better compared to the current state-of-the-art, this is welcome. As I understand it, this work is novel. """,4,1 midl20_70_3,"""This paper describes a data-driven method to design 3D k-space trajectories under constraints on the gradient amplitude and slew rate. The trajectory is designed jointly with the reconstruction method. The method is tested using Human Connectome Project data. I do not believe the work is significant. I do not believe this work has any real strengths. There are many problems. Note to organizing committee: I think it's a bad idea for you to require 200 characters for this text field. I don't have that much to say. This paper makes a lot of major mistakes.
*The paper's main premise is that ""many CS-based methods are not practically implementable on real MRI machines because of the stringent hardware constraints to which random sampling of the k-space does not adhere."" This is a very bold statement because it basically implies that the past ~15 years of image reconstruction research were all a waste of time and focused on infeasible scenarios. Regrettably, the paper's premise is incorrect -- there are many MRI methods that use random sampling and have been practically implemented. This is good for the MRI field because it means that the past ~15 years were not a waste of time. But this is a critical misunderstanding of the literature. *The paper claims that ""to the best of our knowledge, this is the first attempt of data-driven design of feasible 3D trajectories in MRI."" There are existing data-driven design methods that can produce feasible 3D MRI trajectories, including: Haldar JP, Kim D. OEDIPUS: An Experiment Design Framework for Sparsity-Constrained MRI. IEEE Trans Med Imaging. 2019 Jul;38(7):1545-1558. doi: 10.1109/TMI.2019.2896180. Cagla Deniz Bahadir, Adrian V. Dalca, and Mert R. Sabuncu. Learning-based optimization of the under-sampling pattern in MRI. In Albert C. S. Chung, James C. Gee, Paul A. Yushkevich, and Siqi Bao, editors, Information Processing in Medical Imaging, pages 780-792, Cham, 2019. Springer International Publishing. *The paper evaluates performance on an unrealistic dataset. The Human Connectome Project does not provide k-space data, so the paper must be simulating artificial k-space data from images. These images will have no phase and no multichannel features, which makes them very simple compared to the real case. Simulations that are this unrealistic can be very misleading. *The trajectories shown in this paper are very nonsmooth. These may be feasible in simulation, but they probably aren't feasible on real machines because of eddy currents and trajectory calibration problems. *The paper describes 3D stack of stars as ""today's gold standard"". 3D stack of stars is definitely not a gold standard for brain imaging. Wave-CAIPI is an obvious, more recent alternative: Bilgic B, Gagoski BA, Cauley SF, et al. Wave-CAIPI for highly accelerated 3D imaging. Magn Reson Med. 2015;73(6):2152-2162. doi:10.1002/mrm.25347 Polak D, Setsompop K, Cauley SF, et al. Wave-CAIPI for highly accelerated MP-RAGE imaging. Magn Reson Med. 2018;79(1):401-406. doi:10.1002/mrm.26649 Kim TH, Bilgic B, Polak D, Setsompop K, Haldar JP. Wave-LORAKS: Combining wave encoding with structured low-rank matrix modeling for more highly accelerated 3D imaging. Magn Reson Med. 2019;81(3):1620-1633. doi:10.1002/mrm.27511 *The paper claims that previous methods use a maximum a posteriori formulation. This is also incorrect -- very few MRI authors use a Bayesian interpretation of regularization. *The paper claims to demonstrate the true merit of 3D over 2D. The merits of 3D over 2D are classical and well known, and the paper does not offer any new insights. *The paper claims that sampling is non-differentiable because it involves rounding. How does bilinear interpolation involve rounding? Bilinear interpolation amounts to convolving the sample positions with an interpolation kernel and then evaluating the values at the sample positions. This is easily differentiable at most sampling locations. *The paper cites (Lauterbur 1973) for 3D stack of stars. Lauterbur's paper does not discuss 3D stack of stars.
*The color scheme used for Figure 4 makes it very hard to see things. This paper makes major mistakes and does not make a useful contribution. Even worse, the paper makes misleading statements and fails to disclose that the simulations are unrealistic. This sets a bad example for other researchers.""",1,1 midl20_70_4,"""The authors propose to accelerate MRI acquisition by optimizing sampling trajectories in 3D using neural networks. It is a very original work and the authors provide detailed and convincing analyses. -The manuscript addresses a very relevant problem: acceleration of MRI acquisition -The approach is very pragmatic: the authors try to incorporate the physical constraints of the acquisition as best they can -The search for trajectories in 3D opens a meaningful new research direction -The manuscript is very well written -limitations are clearly stated -the proposed algorithm needs to be trained (some deep learning algorithms for MR reconstruction do not) -experiments performed with simulated data -no statistical test to verify assumptions -evaluation on a single dataset Interesting method with potentially strong practical and methodological impact. Substantial analysis of results. Convincing results and good discussion. """,4,1 midl20_71_1,"""This paper investigates machine learning on the UCI chronic kidney disease repository. First, preprocessing and data cleaning are briefly described and included as important steps in the results, yet the novelty and innovation of this data cleaning step is unclear. Second, baseline algorithms (SVM, MLP, KNN) are applied to the cleaned and raw data. It is unclear why one would want to apply machine learning to data with known problems. Hence, the creativity / innovation of this approach is unclear. Overall, the method makes sense, but the contributions are unclear. """,1,0 midl20_71_2,"""1. The paper is well written but I have a concern that the authors' contribution is very low. There are many other papers that have used the same data and the same algorithms and reported the same results. 2. There are other criteria like AUC and sensitivity that have not been reported in this short paper. """,2,0 midl20_71_3,"""Authors used three different machine learning classifiers for CKD diagnosis (binary decision). Authors used publicly available UCI data sets (400 sample data with 25 attributes). MLP, KNN, and SVM were used and compared; MLP was found to give better results as an outcome of the study. -- the paper does not show any innovation, no technical novelty, just the use of 3 known classifiers on a known data set; nothing really is new. -- Table 2 does not provide any sensitivity and specificity; accuracy by itself is not a good (or sufficient) metric to define the success of the algorithm -- Correlation does not mean causation; hence, the selected features may not be really meaningful -- I am not sure if the paper is within the scope of MIDL, which means ""medical imaging"" with deep learning, whereas I can't see any imaging but only some clinical variables to be used as features.""",1,0 midl20_71_4,"""In this work, the authors tackle the problem of automatic kidney disease diagnosis using machine learning. The main motivation is the insufficient healthcare coverage in Ethiopia. They present a study on using three standard machine learning algorithms on a public dataset after feature selection and normalization.
Even though the overall aim of the project is remarkable, the authors fail to deliver anything new and hence, I would highly question the scientific impact of this work! There are numerous studies on exactly the same dataset where the same algorithms were applied! Additionally, the description of the methods is very poor. The feature selection method is not explained; only the name of a Python function is given. I cannot find a single word on the parameters of the machine learning methods, for example the MLP architecture, the SVM kernel or the number of neighbors in KNN. The result that the methods perform better on normalized features is quite expected and, from my point of view, not a significant contribution. On top of this, the work might be out of scope for MIDL as neither medical images nor deep learning is part of the work. Hence, I would vote to reject the work in its current form. However, I want to highly encourage the authors to keep on tackling the very important problem of insufficient medical aid coverage using modern technology! """,1,0 midl20_72_1,"""This paper develops a method to optimize sampling patterns and reconstruct multiple image sequences in MRI. The paper makes several approximations to try and obtain a reasonable solution. The method is evaluated in a comparison between MIMO and SIMO, but there are no comparisons against state-of-the-art methods. Using a BRM to evaluate sampling strategies is a creative idea. Validating the method using real k-space data is important. The description is mostly clear. I don't have other things to say, but need to write something to meet the minimum character count requirements. ========================================================================== This paper claims a number of contributions that are not novel and already well known. ==========================================================================
The paper only compares MIMO BRM against SISO BRM, but there are no comparisons against standard non-BRM methods. *""Because BraTS does not provide raw k-space data, we follow common practices (Xiang et al., 2018; Yang et al., 2018) to simulate k-space data."" Unfortunately, it is not possible to generate realistic k-space data like this. Real MRI images have phase and multiple channels. Images obtained by taking the Fourier transform of coil-combined magnitude images are much simpler and much easier to reconstruct than real data. For instance, these images will have zero phase and therefore perfect conjugate symmetry in k-space, which means that half of the samples can be thrown away without compromising reconstruction quality. This is very different from real MRI data. This limitation at least needs to be disclosed so that readers are not misled into thinking that the simulations are practically meaningful. ""We found that random sampling works better on real data but worse on simulated data."" Seeing a big difference between real data and simulations is a major red flag when the simulations are unrealistic, and suggests that the simulations are not meaningful. *The paper has access to multi-channel information but does not use it to improve reconstruction. This is suboptimal and there are no comparisons to methods that would make good use of parallel imaging (like the methods by Gong or Bilgic). *""We experiment on both low-pass sampling (Xiang et al., 2018) and random sampling (Yang et al., 2018)."" Why not also include uniform undersampling with an autocalibration signal, which is the standard approach in parallel imaging? *It is an oversimplification to assume that imaging time is directly proportional to the number of measured phase encoding lines. Very often, preparation pulses and additional time are required to get an image into an appropriate steady state. This causes additional time that does not change with the number of phase encoding lines. ========================================================================== The results are not very good. ========================================================================== The FLAIR image in Figure 3 is very blurry and is missing clinically-relevant features (e.g., a white matter hyperintensity from the original image is missing in the reconstruction). This is not useful. ========================================================================== I also do not think the paper does a good job of describing the literature. ========================================================================== *""There is a long history of research on how to undersample MR k-space data while maintaining image quality. Lustig et al. (Lustig et al., 2007) first proposed ..."" Undersampling predates Lustig by several decades, with origins dating back to at least the 1980s. An early review article is: Z.-P. Liang, F. E. Boada, R. T. Constable, E. M. Haacke, P. C. Lauterbur, M. R. Smith. ""Constrained Reconstruction Methods in MR Imaging,"" Reviews of Magnetic Resonance in Medicine, vol. 4, pp. 67-185, 1992. *The citations to deep learning MRI reconstruction methods are highly incomplete and leave out some of the most visible contributions. There are several recent review articles on deep learning that do a much better job of describing the literature and are worth reading to become more familiar with the state of the field: C. M. Sandino, J. Y. Cheng, F. Chen, M. Mardani, J. M. Pauly and S. S.
Vasanawala, ""Compressed Sensing: From Research to Clinical Practice With Deep Neural Networks: Shortening Scan Times for Magnetic Resonance Imaging,"" in IEEE Signal Processing Magazine, vol. 37, no. 1, pp. 117-127, Jan. 2020. F. Knoll et al., ""Deep-Learning Methods for Parallel Magnetic Resonance Imaging Reconstruction: A Survey of the Current Approaches, Trends, and Issues,"" in IEEE Signal Processing Magazine, vol. 37, no. 1, pp. 128-140, Jan. 2020. D. Liang, J. Cheng, Z. Ke and L. Ying, ""Deep Magnetic Resonance Image Reconstruction: Inverse Problems Meet Neural Networks,"" in IEEE Signal Processing Magazine, vol. 37, no. 1, pp. 141-151, Jan. 2020. *""As shown in (Xiang et al., 2018; Huang et al., 2012), there exists a strong correlation between sequences of the same patient, as they share the underlying anatomical structures."" This correlation has been used in much earlier literature. See Leahy R, Yan X. Incorporation of anatomical MR data for improved functional imaging with PET. Information processing in medical imaging 1991. pp 105120. Webb, A.G., Liang, Z.-P., Magin, R.L. and Lauterbur, P.C. (1993), Applications of reduced-encoding MR imaging with generalized-series reconstruction (RIGR). J. Magn. Reson. Imaging, 3: 925-928. doi:10.1002/jmri.1880030622 Haldar, J.P., Hernando, D., Song, S.-K. and Liang, Z.-P. (2008), Anatomically constrained reconstruction from noisy data. Magn. Reson. Med., 59: 810-818. doi:10.1002/mrm.21536 ========================================================================== There are also some notation issues that are not clear. ========================================================================== *The paper should define the meaning of the variable theta. This paper has a number of limitations, but I think these can all be addressed if the authors are responsive to comments. The method itself is creative and can be thought-provoking even if it is not state-of-the-art. But it's necessary to list all of the limitations of the work to avoid making readers think the method is more mature than it really is.""",3,1 midl20_72_2,"""This paper works on a task that reconstructs multiple sequence MR images by jointly optimizing sampling and reconstruction. The original formulation for this task is a combinatorial optimization problem since the sampling pattern space is huge. It is transformed to a simpler formulation that showed effectiveness. I overall like the task, formulation and evaluations. The original combinatorial optimization problem is first transformed to be optimization over candidate samples and further formulated as multiple steps method that first learn a general reconstruction network for multi-sequences and select optimal sampling based on this network, followed by fine tuning. The experiments are sufficient and convincing. Overall, the idea is interesting and results are good. However, there are still some unclear points as discussed below. (1) This proposed approach is based on a total time budget, i.e., T_max. Different total time budget may affect the number of samples in each sequence, and affect overall reconstruction accuracies for all sequences. Does the different settings of T_max affect the sampling strategy? (2) The sampling strategy is learned for each dataset, and tested on the test subset in each dataset. I have concern on generalization ability of this learned sampling for other datasets of same / different organs. This is important considering the real application of this approach in MRI imaging. 
(3) Please remove some typos, e.g., ""not ony"" on the first page. The task of multi-sequence MRI is important, the idea of jointly learning sampling and reconstruction is interesting, and the results are overall good. Overall, this is an interesting work that deserves to be accepted.""",4,1 midl20_72_3,"""This paper introduces an imaging acquisition and reconstruction framework for T1, T2 and FLAIR images. The k-space data of the three modalities have complementary information. The undersampled image data were jointly used as input to a convolutional neural network to simultaneously reconstruct high-quality images for the three modalities. This method could significantly reduce the scan time in practice. The introduced method undersamples the k-space of the three modalities using different strategies. It then uses a MIMO CNN algorithm to simultaneously enhance image quality. This method could significantly reduce the scan time. The performance was evaluated and compared with several other methods and the ground truth. The paper is quite self-contained with a very convincing method and novelty. To further improve this paper, it would be better to show the performance on tumor data, although it was mentioned that the BraTS data was used for training. Moreover, in this dataset the images have already been aligned to the same space. If there is motion in the images, does this method still work? The joint undersampling of multimodal images is a reasonable approach to reduce scan time. The joint reconstruction using the MIMO CNN algorithm also has better performance than the standard SISO CNN method. This method could potentially be useful in practice with further validations.""",4,1 midl20_72_4,"""The paper formulates the problem of multi-sequence MR reconstruction as an MR acquisition-time-constrained optimization problem. The authors propose a blind recovery model (BRM) to discover the optimal sampling masks across sequences. Essentially, the method consists of training the model using a set of random sampling masks, then running prediction across what I assume to be either the train or validation set to select the sampling mask that gives the best results. Finally, the model is fine-tuned using the sampling mask selected in the previous step. The choice of the optimal sampling mask using deep learning is a very important topic, but relatively unexplored. Therefore, I congratulate the authors on their efforts. Long MR acquisition times limit the access to this exam for subjects in need of it. Therefore, it is really interesting that the authors included a time constraint in the MR reconstruction mathematical formulation. The authors propose a blind recovery model (BRM), which is well explained in the paper, to tackle this optimization problem. The BRM model seeks to optimize multi-sequence reconstruction while at the same time selecting the proper sampling masks. The BRM does not guarantee the optimality of the results. In fact, it was expected that fine-tuning a model to a specific sampling mask would improve reconstruction. It was also expected that a MIMO model would outperform a SISO model, since this has previously been done.
Although I think it is a great idea to formulate the problem of multi-sequence MR reconstruction as an MR acquisition-time-constrained optimization, I believe the experiments shown in this paper are not sufficient to convince me that BRM models will select the optimal sampling strategy that falls within the time constraint.""",2,1 midl20_73_1,"""The paper presents a well-calibrated uncertainty method for regression tasks taking into account both aleatoric and epistemic uncertainties. The method is validated on four different datasets and three different architectures. The core of their method is to train a learnable scalar parameter to rescale the aleatoric uncertainty. - well-written paper - nice theoretical background on mis-calibrated networks. - extension of Levi et al. 2019 - extensive experiments on four different datasets and three different network architectures. - strong claims about robust detection of unreliable predictions and OOD samples, but never shown. - Modest contribution. - Missing comparisons, for example, Lakshminarayanan et al. 2017. - Lack of in-depth evaluation Lakshminarayanan, B., Pritzel, A. and Blundell, C., 2017. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems (pp. 6402-6413). I really liked the paper and the topic is definitely relevant, and of high importance, to medical imaging. However, there are some issues that have to be fixed before accepting the paper. Most importantly, the comparison with ensemble networks, and the lack of in-depth evaluation and discussion. """,2,1 midl20_73_2,"""The paper proposes to correct miscalibrated uncertainty estimates in deep learning models by adjusting the variance in the likelihood model by a scalar factor. The factor is tuned on a hold-out set after a standard training phase. Preliminary experiments are reported on 4 medical imaging datasets. The paper addresses an important issue in medical imaging. The paper would be more convincing if it focused on showing that the proposed approach is useful in practice, for instance if estimated confidence values correlate better empirically with predictive error, for the medical application of interest. Maybe the experimental results in the paper could be explained in more detail. The paper emphasizes the methodological contribution and positions itself within the Bayesian framework, but this aspect of the work is the most unconvincing. - Ultimately, miscalibration is a bit of a misnomer. The uncertainty estimates correctly reflect the predictive error or they don't. It is unclear why changing flawed estimates by a scalar factor would make them reliable. The proposed method will ""hide"" the most obvious flaw: the order of magnitude. I would argue that the incorrect order of magnitude is a useful symptom of the flaws in the Bayesian model. Addressing the symptom is unlikely to fix the root cause. - The Bayesian aspect is emphasized throughout the paper. Why do the authors feel that their work is strongly rooted in Bayesian principles? e.g. use of priors, design and analysis of the model, inference method. - A major flaw in the analysis stems from how aleatoric uncertainty enters the predictive uncertainty on the regressed variable, when it should not. See below. This puts undue emphasis on the aleatoric uncertainty as a source of miscalibration compared to other model misspecifications (or even the inference method).
For instance, depending on the task: 1) model bias (even with NNs), e.g., if important predictors are missing in the input x, or due to choosing 1 out of many possible architectures; 2) the i.i.d. assumption in the likelihood model; 3) lack of proper priors and overfitting. This is especially relevant for mainstream black-box models whose epistemic uncertainty can vanish in the large data regime due to a string of poor practices (hence the convenience of adding the aleatoric uncertainty). The main idea in the paper comes down to an ad-hoc correction, but the authors position their work w.r.t. Bayesian uncertainty quantification. From this standpoint, the paper is severely flawed. The work would be more convincing if it focused on strong empirical results, showing that the proposed approach is useful in the medical application of interest, for instance if estimated confidence values correlate better empirically with the predictive error.""",1,1 midl20_73_3,"""The problem of calibrating predictive uncertainties obtained using deep neural networks with Monte Carlo dropout is addressed for regression tasks. A rescaling method involving an optimisation step to find a scaling parameter is proposed. It is evaluated using a modified uncertainty calibration error metric on four datasets, and compared to an auxiliary scaling method. - Addresses an important issue for medical image analysis - Proposes a sigma-scaling method with a theoretical foundation that looks fairly straightforward to implement - The method improved calibration on the four datasets and three networks reported I did not find any major weaknesses. The empirical evaluation could have been more extensively described, but it is acceptable for a method-focused conference paper. I suggest some minor modifications below. Moving some of the Appendices material into the main text would improve readability. The paper presents a novel method based on sigma-scaling for addressing the important problem of calibrating regression uncertainty. This problem is particularly relevant in many medical image analysis tasks that need uncertainty measures to inform subsequent processing or decision making. The method could find wide application. Experiments suggest it can work effectively.""",4,1 midl20_73_4,"""This manuscript proposes an optimization framework on top of negative log-likelihood optimization in order to rescale the biased estimation of aleatoric uncertainty. This is an important problem in uncertainty estimation. The presented method is well-evaluated on four different datasets and using several deep architectures, and the results show significant improvement over common auxiliary re-scaling. Even though the authors opted to pitch their method around medical applications, the methodological developments are quite general and apply to a wide range of applications. - The paper addresses a relevant and important problem in the application of deep learning to medical imaging. - The text is clear, the diagrams are descriptive, and the paper is well-organized. - The authors support the experimental observations with enough analytical expansions. - The experiments are comprehensive. I see only a few minor weaknesses in this manuscript: - The calibration plots are selectively presented. I suggest including the rest of the calibration plots (different datasets and architectures) in the supplementary material.
- Considering the use of deep learning and the random processes involved in initialization and optimization, all experiments must be repeated at least 10 times and the standard deviations (or confidence intervals) must be reported in Table 1. The paper addresses an important problem in the community. The text is clear and well-suited for didactic purposes. The experimental setups are appropriate and the results are significant. In short, I think this adds a lot to MIDL.""",4,1 midl20_74_1,"""The paper proposes the use of a novel emptiness constraint together with bounding boxes in weakly supervised CNNs for image segmentation. It correctly builds upon previous works and attains a significant improvement over previous methods. The experimental analysis has a limited range, but shows the most important advantages of the proposed approach over previous ones. It is very well written and structured. Concise and thorough. Very convincing reasoning and argumentation. The description of the method is formal and technically sound. The code availability and the good description of the implementation seem to be complete enough for replication of the results by other researchers. The results show improvement over previous methods. The difference of the proposed approach to previous works by Kervadec and colleagues seems to be very small. At the same time, the experiments compare with DeepCut only, leaving it unclear how the authors' results improve over the most similar previous approaches. The paper brings novelty and shows improved results over previous methods. It is well written, yet concise. Minor issues, such as clarification on the improvement over Kervadec's previous works and a discussion on the impact of the proposed method on the image analysis workflow, can still be added in the rebuttal phase.""",4,1 midl20_74_2,"""The authors propose to learn segmentation of medical images in 2D using a weakly supervised method. This method is based on bounding-box annotations instead of pixel-wise annotations, which are more expensive to obtain. They steer away from classical losses such as the cross-entropy loss, and methods such as DeepCut, and propose their own loss based on the intuition that bounding boxes are tight around the area of interest and the area outside the bounding box does not contain foreground pixels. Since their constraints are hard to optimise for, they resort to a log-barrier method which allows them to use their loss within a standard gradient descent optimisation technique. The optimisation framework proposed by the paper, in order to deal with losses imposing inequality constraints on the outputs of the network, is interesting. It can be used in other applications beyond what's proposed here. The results shown in the paper are good. It seems that the method is able to deliver a good improvement over other methods (DeepCut). A few more comparisons would have definitely helped, but the results already look relatively close to what one can achieve with full supervision. The paper is pretty hard to understand due to its organisation and the way it is written. I personally had to read it multiple times in order to connect the dots and understand what had been done. It would have been much better to present the idea and the general intuition behind it at the very beginning, without introducing complex terms and details which could have been introduced later. The notation of the various formulas and expressions in the paper is not very standard and therefore confusing.
I have checked most of the math, but I am not 100% sure everything is correct. The authors might want to double-check everything and maybe align their notation with the notation used in other works. It seems that the authors optimise for background emptiness, subject to some constraints pushing the foreground within the bounding box. The foreground needs to have a minimum area and touch the bounding box boundaries in at least ""w"" points. Figure 2 is supposed to show the second constraint (foreground touches the bounding box in at least w points) but it actually does a terrible job explaining that. Please change Figure 2. One of the two proposed emptiness constraints (Eq. 1) prescribes the sum of the predictions outside the bounding box to be less than or equal to zero. The predictions are always positive though (so the sum can be zero, but not smaller than zero). This is true, unless the authors meant to indicate the ""logits"" (or network outputs before the last activation) in the equation. I am unsure whether the constraint ""Uncertainty inside the box"" is intuitively explainable. Why does the foreground need to touch the bounding box side in at least w points? Constraining the global size relies on a manually supplied parameter epsilon which is decided by the user. It looks a bit arbitrary. Finally, I personally disagree with the statement that pixel-wise segmentation is expensive. Pixel-wise segmentation is not expensive if it is mediated by interactive deep learning methods that do most of the work for the user. I believe that whoever still traces the ground truth segmentation without smart annotation tools by marking each individual pixel as background or foreground is just wasting time. Pixel-wise volumetric multi-class segmentation can be achieved in seconds using smart annotation tools. I believe the paper has merit, but I found the constraints proposed here heavily based on heuristics and user-supplied parameters. The presentation needs to be improved as it is currently unclear. The proposed optimisation framework, based on the log-barrier method to make the loss optimisable using standard gradient descent techniques, is a great idea and can be used in multiple problems going beyond those presented in the paper. I regard this paper as borderline. The results show improvement over other approaches and good results relative to fully supervised methods.""",2,1 midl20_74_3,"""The authors propose a method to perform semantic segmentation of anatomical regions of interest from bounding box annotations (weakly supervised labeling). The approach utilizes two constraints: (i) a tightness prior and (ii) a background emptiness constraint. Segmentation results are presented for MRI prostate gland segmentation and MRI brain lesion segmentation. Results demonstrate that this approach (using inexpensive labeling) can achieve segmentation accuracies close to that of fully-supervised semantic segmentation (using expensive labeling). - Motivation is strong from a clinical perspective in that detailed annotations are expensive. - The background literature cited is comprehensive. - Testing on two different MRI datasets (for two different tasks: prostate gland segmentation and brain lesion segmentation) shows that the approach works well on different types of segmentation tasks. - Evaluation is compared to fully-supervised semantic segmentation and DeepCut methods. - Results show performance approaching that of fully-supervised semantic segmentation.
- Preliminary results show that the method is relatively robust to errors in the bounding box segmentation. - The paper is well written and design choices are clearly explained. - Testing on n=10 subjects for prostate and n=26 for brain is limited. Cross-validation studies would be more rigorous. - Contribution of the global size constraint (Sec. 3.3) is not quantified in the results. I think this is a strong paper that would be of interest to the MIDL community. The training loss function constraints are impactful, mostly novel contributions for medical image analysis. These initial results are encouraging and show segmentation performance approaching that of fully-supervised semantic segmentation while using weakly annotated (and therefore much less expensive labeling) data.""",4,1 midl20_74_4,"""This paper introduces several new losses based on bounding boxes for weakly supervised learning. Experiments on two medical image segmentation tasks show that the performance with the new losses approaches the supervised version of the same model. Bounding boxes are a much cheaper labelling method than manually segmenting the whole target, especially for medical images, so it is valuable to study using bounding boxes to train segmentation models. One weakness of this paper is that the baseline model is relatively weak. The major strength of this paper is that it develops a novel way of utilizing bounding boxes as weak supervision for medical image segmentation, and the two new losses introduced inside and outside the box look reasonable. Experiments on two datasets show improvement over a previously published method, DeepCut. The paper is well organized and written. 1. The baseline method DeepCut is relatively old, especially when considering that deep learning is a fast-developing area. A series of methods, including DeepCut and newer ones, can be found at: pseudo-url. I wonder if DeepCut is the state-of-the-art. 2. Regarding the main experimental results: the DSC on PROMISE12 is fairly good but its improvement over DeepCut is small, and I wonder if this improvement is statistically significant. On the other hand, there is a big improvement of DSC over DeepCut on the ATLAS dataset, and the gap to the supervised version is small, but the final DSC of 0.474 is too low to be meaningful in practice. Even the full supervision approach only achieved a DSC of 0.489. I wonder what's the state-of-the-art result on this dataset. It's valuable to study weak supervision by bounding boxes, especially in medical image segmentation tasks, because manual pixel-level labeling is very expensive. This paper proposes new losses both inside and outside the bounding box, and reformulates the objective function to make it feasible for backpropagation optimization. Experiments show that the losses are workable, but I refrain from giving a strong accept because of the relatively weak baseline.""",3,1 midl20_75_1,"""A 2D U-Net-based approach in combination with adjacent slice fusion is proposed for MRI spine segmentation. The method is compared to a pure 2D U-Net and a 3D U-Net, with the proposed method outperforming both when evaluated on the Spineseg T2W data set. A DICE score, precision and recall all of about 90% are reached with the method. The paper has the clear proposition of making use of the 3D context for spine segmentation while avoiding the runtime performance cost of a 3D U-Net. The approach is evaluated on a public data set and compared to 2 alternative approaches (2D and 3D U-Net). The motivation is not completely clear.
The authors state quite generally that 3D CNNs suffer from high computational and memory costs, without relating this to the respective boundary conditions of a given clinical workflow or to known problems. Later on, the computational and memory costs are not detailed for the three approaches; it is only generally stated that their approach is 3 times faster than the 3D U-Net approach. The concept of the attention mechanism is not completely clear. It is also somewhat misleading, since the attention mechanism is a post-processing step, fusing the 2D results of the slice at hand with the results of the previous and following slices. It is not, as one could assume, a mechanism that steers a neural network in a way giving more weight to an attention area or the like. The attention mechanism is not really described in detail and seems to be some kind of slice averaging approach. What exactly is the 'attention generation' step in Fig. 3? The paper is interesting and sound enough to be accepted, but there are too many unclear points and weak points to justify a strong accept. Especially the attention module needs to be explained in more detail, and the authors should reason why their approach outperforms a 3D U-Net. """,3,1 midl20_75_2,"""The paper describes a new segmentation method to segment vertebrae in spine MRIs. The method was validated on the SpinesegT2W dataset (seems to be an internal closed-source dataset) with 190 sagittal T2-weighted MRIs. For comparison, the paper compared the performance of the proposed method against several other methods, namely the original U-Net and 3D U-Net. - The idea of using a 2D CNN coupled with an inter-slice refinement step to help 3D segmentation is novel and interesting - The ISA bit of the paper should be easily applicable to other segmentation methods working on 3D volumes - Better performance than 3D U-Net - Validated on a large dataset - The structure of the paper can be better - Some bits of the paper can be a bit clearer Detailed comments: - Section 3.1. The following sentence is a bit unclear ""For better extract of intra-slice feature, we redesign the structure of convolution blocks of classic U-Net, proposed stacked Dense U-Net structure for rough segmentation based on intra-slice information."". - Section 3.2. ""For segmentation tasks, attention is usually achieved by creating masks that represent an informative region on feature maps, so as to highlight the most salient regions and suppress irrelevant regions."". How are masks generated? Are they thresholded or are they the raw output slices from SAU-Net? - Section 4.1. ""The dataset contains 195 and 20 sagittal T2-weighted spine MR images of patients ..."". What do 195 and 20 refer to here? - Section 4.4. It would be interesting to see the effects of using ISA on the 2D U-Net and 3D U-Net. - Section 2.2. Missing space between features and for in the sentence ""... the most discriminant features.For medical image segmentation"" - Section 3. Figure1 -> Figure 1. Fix throughout paper. - Acknowledgements should not be a numbered section. The paper presents a method to easily improve 3D segmentation that should be easily applicable to other methods. The proposed method also presented good performance on multiple metrics compared to other segmentation methods. The paper is well validated but there are minor weaknesses that need to be addressed.""",3,1 midl20_75_3,"""This paper presents a method for segmentation of the vertebrae in lumbar spine MRI scans.
The method is based on a dense 2D U-net and an additional refinement step based on an attention mechanism. This refinement step takes information from the previous and the following slice into account to refine the 2D segmentation mask. The method was evaluated with cross-validation on a set of approximately 200 MRI scans and compared with a 2D and a 3D U-net. - Vertebra segmentation in MRI is a challenging task; many previous publications focused on segmentation of only the vertebral bodies - The presented method is not very complex - The method is evaluated on a sizable dataset - The introduction mentions several vertebra segmentation methods, but in the experiments the authors compare their method not with any of these state-of-the-art methods but with a standard 2D and 3D U-net - The improvement in segmentation performance even with these non-optimal baselines is rather small - There is no evaluation on a public dataset, such as pseudo-url - The description is not always clear; for example, it is not clear what the ""stacked"" part of the ""stacked dense U-net"" is While this is a very relevant application and overall a sound method that combines a 2D segmentation network with a mechanism for incorporating information from neighboring slices, the evaluation of this method needs to be improved. A comparison with a state-of-the-art method is missing and the metrics used are not ideal.""",2,1 midl20_76_1,"""The work experiments with a flat prior in functional space for multilabel classification, in a deep learning setting. The network f_theta predicts, given the input x, the concentration parameters alpha of a Dirichlet prior on the vector of class probabilities. After marginalizing over class probabilities, this yields the standard softmax on label probabilities (up to reparametrization pseudo-formula <-> pseudo-formula ). The functional prior is built from the Dirichlet distribution with pseudo-formula ; and evaluated on a measurement set in the data space that suitably accounts for in- and out-of-distribution points. I think this abstract is suitable to be presented at MIDL. The approach is not necessarily very novel in practice, but the functional space perspective is still uncommon and interesting (including the soft constraints induced by a prior that looks uninformative at first glance). Also, the functional viewpoint as a way to incorporate Bayesian priors in neural networks is a promising direction. There are a few typos that can be corrected. The choice of validation is suitable and reasonably executed given the format. The entropy/ies are mentioned at the very end but not reported? There are a few claims that do not necessarily serve the argument: ""uncertainty outputs, which can increase patient safety [...]"" -> maybe not necessary to go there unless you have results? ""Our method is also significantly less computationally expensive as compared to Bayesian or frequentist approaches"" -> At most it is orthogonal to being Bayesian or frequentist. The work is quite clearly using the Bayesian toolbox, including evidence lower bound computations (as per the title(!), it is an instance of variational inference).""",4,0 midl20_76_2,"""The paper proposes to place a Bayesian prior on the distribution generated by a deep network instead of the traditional approach in Bayesian deep learning, where the prior is placed on the weights. The choice of the short paper format makes the presentation extremely dense, and it is thus very hard to thoroughly evaluate the paper.
This goes both for the mathematical developments and for the correctness of statements such as ""the regularization caused by these prior is not able to calibrate the network output, nor do these priors explicitly make the model under-confident on the OOD samples."". However, setting the readability aside, I believe the idea pursued in the paper - to define the prior on the output distribution - has merit. The authors develop the variational framework for training the network, define an evaluation criterion (the ECE measure), and validate the model experimentally on skin lesion classification. Overall, though the paper would certainly benefit from more pages to elaborate on all aspects of the model, and though I cannot fully validate its correctness, I find the paper has potential merit and would be an interesting read for the MIDL audience.""",3,0 midl20_76_3,"""The key idea in the paper is to use a functional prior that is completely uncertain about the prediction of any class. To achieve this, the idea of introducing a Dirichlet distribution after the neural network is taken from the Evidential Deep Learning (EDL) paper. From Table 1, it is clear that the ECE is much lower for the proposed method. However, I have the following concerns: 1. It is not clear why calibration is reported and not simple measures of uncertainty like variance or entropy? Also, I would be convinced that the variance would increase for out-of-distribution test samples because you used a prior that enforced uncertainty of all labels. Now, it is difficult to connect the use of the prior and the improvement in ECE. 2. What is the experimental setup? Did you train on some other dataset and test on the skin lesion dataset? 3. Last line of section 1: ""it can distinguish distributional versus data uncertainties"". How? Overall, the idea is fine. """,3,0 midl20_76_4,"""The paper studies predictive uncertainty estimation for medical diagnosis. Basically, they move from the weight space view to the function space view and run their proposed inference method directly on functions. The main idea is that the DNNs can be better calibrated due to the direct modulation of functional outputs. They also claim that OoD examples will be better detected this way. While the paper is in principle interesting, I have the following concerns. Experiments are limited and the results do not substantiate the claims. No results are reported for OoD detection. So, how could I judge the performance in this regard? They also say that they can better distinguish the distributional uncertainty from data uncertainty, in comparison to Sensoy et al. (2018). But, no results... Only Table 1 reports classification accuracy on skin lesion classification and the calibration error. However, even this comparison lacks proper evaluation. The drop in ECE seems significant but there is no mention of any statistical test, despite the bold statement of 'significantly lower ECE'. Also the computational efficiency comparison between MCDO and DNN ensembles is a bit careless. How many samples were drawn for MCDO? What is the ensemble size? How can one interpret a 25x or 5x gain under this scenario? I have a few reservations regarding the writing, too. References float freely in the text. Also, additional references are required since some of those sentences are derived/learned from earlier work.
For example, ""Furthermore, the typical classification setting of training the softmax output layer using cross-entropy loss typically gives over-confident (low entropy) class probability mass distributions, even when there is a classification error [HERE]."" This is especially concerning for training on medical datasets that are often relatively smaller and suffer from severe class imbalance -(- Esteva et al. (2017) -)-. In other words, the popular deep learning models give poorly calibrated uncertainty estimates for cases that are ambiguous, or difficult, or out-of-distribution (OOD), including those from a new class [also HERE]."" ECE should also be cited since it is not invented in the current work. In Eq.4, what is that fancy 'F'? In summary, the paper has a good motivation and seems to go in the right direction. However, the paper is in its infancy and not very convincing in its present form. """,2,0 midl20_77_1,"""This paper proposes a super resolution method for diffusion MRI, called dMRI-SRGAN. The authors compare this method with a previously published SRGAN approach. This SRGAN approach are composed of two main components: a generator network and a discriminator network. The generator and the discriminator attempt to satisfy two opposing objectives. The new theory presented is interesting. The authors indicate that the proposed method gives a better Peak-Signal-to-Noise-Ratio (PSNR), and better connectome analysis data, compared to a few recent methods. The result presented in Fig 2 for the new improved dMRI-SRGAN method is disappointing. The rightmost figure depicting the result has horizontal stripe artifacts not present in the previous methods (top), and the lower image is quite blurry. The result presented in Fig 2 for the new improved dMRI-SRGAN method is disappointing. The rightmost figure depicting the result has horizontal stripe artifacts not present in the previous methods (top), and the lower image is quite blurry. I would have given a higher rating with more impressive results. """,2,1 midl20_77_2,"""The authors proposed dMRI-SRGAN, a super resolution method for MRI data, which aims to reduce the reconstruction error of the SR with additional information in a self-supervised way. The additional information is added by using volumetric labels generated with different b-values. The proposed method is compared to SRGAN, the baseline method and semi-SRGAN to demonstrate its effectiveness. -dMRI-SRGAN perform the super-resolution task in a self-supervised manner, using intrinsic volumetric information. Without requiring additional manual labelling, the method achieves similar or better performance in some cases. - dMRI-SRGAN are compared quantitatively and qualitatively to related methods, SRGAN and semi-SRGAN, with signal analysis and connectomic analysis. The capability of the new method is demonstrated in different aspects. - the motivation and contributions of the method are not clearly stated, it would be great if the author could list them in the introduction section, and the abstract should also be more straight-to-the-point to state the advantage of the proposed method. As they are unclear, the technical contribution seems limited. - it is understandable that semi-SRGAN performs the best in many cases, however dMRI-SRGAN does not out-perform SRGAN, the baseline method, in cases such as CPL and GE, what might cause this? 
The method uses intrinsic information for super resolution instead of manually labelled data; the results are comparable to semi-SRGAN and the idea is interesting. However, details of the proposed method are lacking, and the motivation for this type of self-supervision, i.e. labels from b-values, is not very clear; therefore the novelty is not sufficient.""",2,1 midl20_77_3,"""This paper proposed SR methods for diffusion MRI using adversarial learning. Compared to the previous SRGAN, the authors improved the results by adding extra information and constraints. The authors demonstrate that the proposed method is effective for better FA/DWI reconstruction and structural connectome analysis. The authors proposed a novel generative adversarial network for dMRI super-resolution. The authors utilized additional inherent information, i.e. the label for each shell, in the dMRI. Experimental results demonstrate that the proposed method outperforms the baseline methods. - It is not clear how the authors generated LR DW images from HR images with downsampling. - In Figure 1, the authors used the same notation D for the discriminator and the volumetric segmentation network (although the discriminator was an italic D). Please use different notations for clarity. - In the definition of the loss functions, except for the adversarial loss, the definitions of the reconstruction loss and segmentation loss were not explained. - The authors introduced a volumetric label which is unique for each shell. Unlike the tissue segmentation label, this volumetric label should be one single value for each DW volume. Do the authors still need an upsampling operation to obtain the same size as the HR image? Also, DW images for each shell have different intensity ranges, which can be easily differentiated. - The improvement is marginal or sometimes worse, and the reconstruction results are not close to the ground truth (Figure 2). Although the authors proposed a novel dMRI SR framework, the paper lacks details, especially in the Method and Experiments sections. Specifically, the way LR images are generated from HR images is unclear.""",2,1 midl20_77_4,"""Authors propose a super resolution (SR) method with the specific objective of increasing the spatial resolution of diffusion MRI (dMRI) images. The idea is to use generative adversarial networks (GAN) to achieve this. The key idea is to combine 2D SR images obtained from axial and coronal planes to form a 3D SR image. Results are visually shown on a single cross-section of the whole-brain FA and B0 images. Comparisons are also provided from a connectomics point of view. - Authors address a relevant problem. There is a need to increase the resolution of dMRI images. - Deep learning techniques have the potential to achieve the mentioned goal. - The manuscript is well written. The content is well organized and use of language is adequate. - Some important prior work was not mentioned. - The Methods section is not very clear. Not enough information about the network architecture was provided. It is not clear how the 2D images were combined. - Results are not convincing. Only 1 figure with limited information is provided. - No comparisons against regular interpolation are provided. - There is no justification for the connectomics analysis. The results for the low resolution (LR) image are not provided. Authors propose a super resolution approach but the experimental setup and results are not convincing.
Only one figure is provided to qualitatively evaluate the performance of the technique, and it shows severe artefacts for the proposed technique. Also, quantitatively, no comparison against basic interpolation techniques was provided. There are holes in the method and it is not clear how 2D projections are combined together to yield a 3D image.""",1,1 midl20_78_1,"""This paper reports a study using texture features instead of initial image intensity values, for classification on neuroimaging data. Concretely, the local binary pattern (LBP) is used for the experiments with two different radius values. A comparison with using initial image intensity values, however, does not show any advantages. This work seems to be the first study of this kind on neuroimaging data. It is certainly something interesting to investigate for a particular application domain. The local binary pattern is also a reasonable choice due to its popularity. The technical novelty is low. In addition, several essential issues remain untouched. Overall, it is only a very preliminary study. The finding is not a surprise. Basically, it confirms the results that have been reported for other domains, in particular natural images. In all test cases the performance of using intensity values turns out to be, partly substantially, higher than using texture features. Therefore, what is the gain of this study? Given the extra information available in the texture features, one may expect to achieve higher performance using a network architecture of the same complexity. Alternatively, one may expect to achieve the same performance using a slimmer network architecture. Such important issues are not discussed. There are other reasons why this is only an initial study. Why are only radius values of 1 and 10 pixels used, not something in-between? The authors leave it to future work to explore whether the findings can generalize to other measures of texture, or whether they are specific to the LBP algorithm. It is straightforward to repeat the same experiments for other popular texture features. Despite being the first study of this kind on neuroimaging data, this study is only a very preliminary one. A number of important issues remain untouched, or are partly left as future work. The overall value of this study is unclear.""",1,1 midl20_78_2,"""The premise is to test the texture hypothesis introduced in the ML literature, for the first time, in a newborn neuroimaging application. The goal is to represent neuroimaging data as local binary textural maps and learn accurate segmentation models using those instead of an image-based representation. The image texture information is encoded in the form of local binary patterns (LBP) that are easy to compute and are intensity invariant. DeepMedic is trained for a 10-class segmentation with data augmentation and the outcome is compared to the DHCP solutions via Dice overlap coefficients. The proposed pipeline is applied to an interesting and rich MRI data set from the recently released DHCP2 cohort of newborns. The application area is new and well deserving of attention. The results are promising when compared among the intensity-based solution (segmentation from the default DHCP processing pipeline) and two different versions of the proposed pipeline. I found the premise of the work interesting. I wish the authors had taken it a step further and looked at finer segmentation labels, given that the Dice overlap metric is very forgiving when computed for large ROIs.
Using the more detailed segmentation (~90 labels) from the DHCP dataset could be more informative and could give more insight into design and interpretation. It might also help with the granularity vs ROI intensity profile discussion of the authors. It is a well-written paper, with well-thought-out experiments and insightful discussion of the results. The technology is not new, but the application area (pediatric neuroimaging) is, and the results are promising.""",4,1 midl20_78_3,"""In this work, the authors attempt to provide additional evidence for the theory that CNNs rely heavily on texture information and mostly ignore shape information in the input. To support this theory, the authors compute local binary patterns (LBP) of the input images, then train and test CNNs based on these images for the task of segmentation from T2-weighted MRI brain scans. The authors show that performance is mostly maintained after using the LBP as compared to the pixel intensity values, with regions bordering the background class showing the most significant performance decrease. 1. The related work and motivation are clearly laid out. There have been several works in recent years (cited by the authors) which show CNNs are heavily biased towards texture information over shape information. 2. To the best of the reviewer's knowledge, this is the first work to focus on segmentation while examining the importance of texture over shape. An interesting analysis on the surface since, as opposed to classification and regression, segmentation requires fine localization. 1. The contributions of this work are somewhat minimal. As the authors state, there have already been several works that show CNNs rely almost exclusively on texture information over shape, in standard computer vision data and even brain MRI. 2. While the authors try to go for a nice angle of demonstrating CNNs relying on texture for the task of segmentation, the algorithm works based on local binary patterns which remain highly structured. Thus, the argument that shape information is not being learned is very hard to justify. When looking at Figure 2, one can see that the majority of the shape information is retained (and really shape information has to be retained for any segmentation network to have any chance of defining region boundaries). 3. There is no practical benefit to training and testing on LBP data that is apparent for this application domain. This paper is clearly meant as a theory/validation paper. There are no practical benefits to the proposed method and no application benefits. However, from a theory point of view, the paper is just providing some minimal evidence of a theory which has already been shown in many works. The reviewer is not convinced that using local binary patterns for training can guarantee the network is using only texture information (i.e. all shape information has been removed). Further, there is nothing here which can provide new key insights to future researchers and spawn new research areas.""",2,1 midl20_78_4,"""This paper proposes modeling neonatal MRI brain images in terms of binary texture operators. Classification results are shown.
Front-end texture operators in a deep learning context are interesting. Unfortunately, the authors are not aware of relevant prior work. For example, texture operators have been used, in the form of local 3D SIFT features, to predict infant age: 'A feature-based developmental model of the infant brain in structural MRI', Toews et al., MICCAI 2013. Furthermore, the work of Toews et al. performs age prediction in early stages of infant neurodevelopment, including the myelination phase with contrast inversion between white and grey matter, from 0-104 weeks (0-2 years of age). It is unclear how the present approach would cope with appearance changes such as contrast inversion, since the work here is restricted to gestational ages at scan of 24.3-42.2 weeks, a rather narrow interval. The primary challenge of myelination and contrast inversion typically occurs around 12 weeks, much earlier than the narrow 24.3-42.2 week range investigated here. The authors appear unaware of developmental MRI changes, including myelination. The same 3D HoG-SIFT keypoint approach was used to discover labelling errors in the OASIS, ADNI and HCP datasets previously unknown to the neuroimaging community, and source code is available, so the authors here should at least be aware that they could potentially compare to a highly effective texture-based approach: Chauvin et al., NeuroImage 2020, 'Neuroimage signature from salient keypoints is highly specific to individuals and shared by close relatives.' Uncited literature, practical challenges, previous work.""",2,1 midl20_79_1,"""The authors present a reinforcement learning approach to landmark detection. They show that the same model can perform several different tasks (nipple detection, prostate detection and organ detection in MRI) and show its effectiveness. They argue that such an approach is better than having different models for different tasks. The tasks presented are not that novel, and neither is the method, but it is an interesting result nevertheless and can be expanded upon by other researchers. - It is interesting to see RL applied to landmark detection in medical imaging - The authors show the model performs well on several different applications and MRI sequences - The evaluation is a bit limited, but appears sound - I see no use for nipple detection in MRI. I can imagine that this works as there typically is only one nipple, while you can have several lesions. I would like to see argued better why this is relevant. - There is no comparison with other methods to detect these organs. While I see that they likely are more task specific, setting everything up well should allow retraining the models for a different task as well - The method is purely 2D, which is nowadays uncommon for medical images. - Application paper. Neither method nor task is new.
- It is an important question in medical imaging. - Experiments have been well thought out, but are sometimes lacking in relevance. - Methodology appears sound.""",3,1 midl20_79_2,"""The aim of this paper is to develop and implement multitask modality-invariant deep reinforcement learning for landmark localization and segmentation in radiological applications. The topic is interesting. However, there are several concerns about this paper. The strength of this paper: 1. A good topic, implementing multitask modality-invariant deep reinforcement learning for landmark localization and segmentation in radiological applications, showing the feasibility of training a single deep RL agent for multitask modality-invariant applications. The weaknesses of this paper: 1. In the case of bounding boxes, please use the intersection over union (IoU) metric. 2. There is a lack of ablation studies. 3. What is the FROC evaluation on this detection task? In the test set, an image does not necessarily contain the regions of interest. 4. In Table 2, the unit of distance error is pixels. However, the number of pixels depends on the FOV of the images. Therefore, I recommend using absolute mm as the unit. 5. Small number of samples in the training and test datasets. In addition, there is no external validation or cross-validation.""",2,1 midl20_79_3,"""This paper proposed to use a single DQN agent to localise different landmarks in different MRI imaging sequences. An evaluation is provided for localizing six different anatomical structures throughout the body, including knee, trochanter, heart, kidney, breast nipple, and prostate across T1-weighted, T2-weighted, Dynamic Contrast Enhanced (DCE), Diffusion Weighted Imaging (DWI), and DIXON MRI sequences obtained from twenty-four breast, eight prostate, and twenty-five whole-body mpMRIs. - RL is promising for assistance tasks - The angle of this work is interesting: train a single agent for many modality-dependent tasks. This hasn't been done previously. - The dataset and resulting training environment are interesting - The authors claim to introduce the 'MIDRL' framework, which implies novelty. However, the framework used is a DQN with various training modalities. No special framework is introduced that would make multi-modality environments possible. - The task and modality are dependent.
The CNN might just use its capacity to propose actions for each modality independently. Some overlap might exist for the tasks because the same organ is targeted in several multi-modal acquisitions. What would happen if different target landmarks in the same modality defined the task? - This is a 2D method that will always return a localisation for each slice, as stated in the paper's discussion: ""The deep RL framework in its current implementation is not equipped to return NULL when a target is not found."". How are the slices that don't contain the landmark handled during the evaluation? Is this only evaluated on slices that are guaranteed to contain the landmark? - Why hasn't a 3D method been used, as is common in the frequently cited related work? - It is not quite clear why there are results missing in the tables. - The contribution is overstated. The authors' argument is that previous 'models were limited to a single anatomical environment' and that their method is the first that provides 'a multi-environmental model'. As soon as several landmarks need to be found in each of the modalities, this approach breaks down. First, as shown by Alansary et al., different landmarks require different training and DQN strategies, and as shown by Vlontzos et al., multiple landmarks perform best with multiple agents that share weights. Testing Vlontzos' work in the proposed multi-modality environment for multiple landmarks per modality would be interesting, but this hasn't been done in the manuscript. The paper is ok, but would probably be better accepted as an abstract with poster. RL is clearly an interesting area for medical image analysis and should be discussed at MIDL; however, the paper should not be published as a full paper in its current form. """,3,1 midl20_80_1,"""This paper proposes a CNN approach for head and neck cancer outcome prediction. A better motivation of the method is needed, as well as clear statements of what is shown in the experiments. More detailed comments are provided in the following: I think that the main message of this paper is in the introduction: ""In this work, we show that combining PET and CT image inputs improves [...]"". This is not in the abstract while it seems to be the only point made here. In 1. Introduction: the claim ""We show that these three modifications [...]"" is not supported by the experiments/results. No evaluation is made on new centers, unlike what is sort of motivated in the abstract. What is the motivation for UNet + CNN? It is only mentioned in the conclusion and should be given before. It seems like the validation set is not used. Is early stopping not performed on the validation set? There are typos in Section 3. In the introduction, the radiomics method of Vallières et al. is mentioned with CT and PET but only with CT in Table 1. """,3,0 midl20_80_2,"""This paper performs head and neck cancer outcome prediction by ResNeXt with a U-Net for preprocessing. However, the idea of using a trainable U-Net as a preprocessing step has previously been used in segmentation tasks, so the methodological contribution is small. In addition, the experiments are not very convincing. With the full proposed architecture, UNet-ResNeXt has the same AUC as a previous study (Diamant et al. 2019). A reduction of a small proportion of parameters is a very small advantage. Of course, with the addition of PET images, the AUC is increased to 0.76, but I think this is a natural result, considering the significant role of PET in staging the tumor. It's a natural thought that the performance of (Diamant et al.
2019) can be equally improved by incorporating PET images. Not needing the GTV mask may be an advantage of the proposed method over (Diamant et al. 2019), but this is uncertain without a direct comparison of accuracy. The paper is generally well written and easy to understand, but the first two sentences of the Abstract are a little misleading. From the first sentence, it seems that this paper wants to deal with the problem originating from multi-domain datasets. However, there is no study about cross validation among different hospitals, though images from four hospitals are used. Regarding the second sentence, I don't see any priors as constraints in training in the experiment. """,3,0 midl20_80_3,"""Summary: The authors proposed a U-Net preprocessor and a ResNeXt classification network for survival prediction. Improved performance is observed when using UNet-ResNeXt and PET+CT as input. This is an interesting study and provides possible insights for other survival analyses during their model development. But I think the authors need to do more ablation studies and provide training details to let readers understand and use their work. Major Concerns: 1. The authors claimed that the proposed model works without requiring manual GTV segmentation annotations, but it seems the model needs to select the largest primary GTV slice. If there are no GTV masks, how can you select the largest GTV slice? 2. It seems PET can help improve results a lot. Have you tried a model with only PET? 3. How did you compare with other baselines? Are you using the official split from Diamant et al. (2019)? 4. Did you use any data augmentation during training? Did you perform early stopping because you have a validation set? 5. What is the data distribution for death and survival? How about specificity and sensitivity?""",3,0 midl20_80_4,"""The paper evaluates the performance of a model based on a UNet pre-processor followed by a ResNeXt classifier for survival prediction in head and neck cancer patients. The proposed model uses a 2-channel input consisting of corresponding slices from CT and PET volumes from a publicly available dataset. This is a well-written paper, and the descriptions of the method and experimental setup are clear and unambiguous. The results apparently outperform the state-of-the-art for this application and the model uses fewer parameters. Overall I am happy for this paper to be accepted but there are a few questions that need clarifying. First, in Table 1, there is a reduction in performance of 5% AUC between the Diamant et al. method and the basic ResNeXt model. The obvious question is whether this is due to the different architecture or the different input (i.e. GTV-masked CT slice and full CT slice). Can the authors comment on this? Also regarding Table 1, and depending on the answer to the first question, would it be possible to get even better results by combining the Diamant et al. model with the UNet pre-processor and/or the PET data as input? Other specific suggestions: Title: Convolutionnal should be Convolutional Section 3, paragraph 2: the the - remove repetition Section 3, paragraph 2: can be considered can be considered - remove repetition """,4,0 midl20_81_1,"""The paper describes an adaptation of the VQ-VAE for 3D (medical) data with an adaptive reconstruction loss function, ultimately facilitating high reconstruction fidelity for complex 3D data. To date, this has been very challenging due to limitations of computational resources and is thus highly relevant.
In fact, this holds potential to enable a plethora of future research in the field of 3D medical image analysis. - Clearly motivated and well written - Trying to solve a relevant problem: high fidelity reconstruction using VQ-VAEs on 3D data - This opens up many opportunities for tasks such as unsupervised anomaly detection - Testing for statistical significance is highly appreciated - Very interesting loss formulation, which can surely be useful in many other settings as well - I feel the methodology could be expanded to discuss multiple design choices and formulations in greater detail - For instance, the proposed VQ-VAE model has something that I would refer to as ""skip-connections"", i.e. modeling / compressing features at different scales. In this context, I believe a comparison to a normal VAE (or Autoencoder) which has such connections / compresses at multiple scales would be required and would show the real benefit of such VQ. - Generally, I feel the variational part of the method was hardly addressed. I believe that exemplary image synthesis using the VQ-VAE (it is supposed to be a generative model after all) or residual-based anomaly detection would have strengthened the paper a lot - I suspect some details on the loss function from Baur et al. are missing: I would assume that the weighting of the different loss components plays an important role. How were the different terms weighted during optimization? - The baseline the authors compare against operates at a very different resolution than the VQ-VAE does; in this context, I am not sure whether the provided metrics are all valid, e.g. the Dice-score might be generally lower at lower resolutions. Can the authors please elaborate more on this validity? I consider this work an enabler for future research on 3D medical image analysis in various directions. In addition, the paper is well written. Thus, the work deserves to be presented to a wider audience, if experimental design choices can be elaborated more and additional baselines can be added (if appropriate).""",3,1 midl20_81_2,"""The paper modifies VQ-VAE for encoding a full-resolution 3D brain MRI to a small percentage of the initial size while preserving morphology. The modification includes replacing the 2D convolution blocks with 3D blocks and using recently proposed initialization and loss functions. The proposed idea is evaluated on the standard dataset, and the presented results demonstrate improved performance in terms of image consistency metrics. Adaptation of the widely popular VQ-VAE into a 3D setup is one of the biggest strengths of this paper. This extension, especially in the medical imaging domain, could be significant as working with 3D data has been one of the critical challenges of this field. Having a high-fidelity 3D model should also encourage learning and analyzing the latent space for these applications. The proposed methodology seems to be a holistic approach to integrating useful properties from previous works in one place. In this sort of methodological contribution, it is natural to expect ablation studies that analyze the effect of such inclusions. Not just an ablation study; even a proper discussion about some inclusions seems missing. For instance, it is not clear why the loss function (Baur et al., 2019) was used. Also, the paper is poorly written, with claims not backed by the proper references and numerous errors. For example, there are claims like modeling ""gradients instead of pixel intensities which almost always work better"" without any reference.
Although the paper aims at solving a significant problem in medical image analysis, the motivation and the experiments are not aligned. As explained earlier, an ablation study is critical when the proposed methodology is presented as a combination of prior works. Furthermore, the paper is not well written, and the usage of some approaches is not clearly explained. """,2,1 midl20_81_3,"""The paper extends the VQ-VAE architecture to 3D images and then shows that this VQ-VAE can yield good reconstruction (including neuromorphology preservation). The reconstruction is compared to another paper based on a VAE + GAN architecture. The experimental section has both qualitative and quantitative comparison of the reconstruction with the VAE+GAN baseline. Looking at the Dice GM, WM and CSF, the VQ-VAE clearly does a better job than the baseline method, the α-WGAN (Kwon et al.). Similar results can be seen in Fig 2, 3 and 4. It is good to know that VQ-VAE can be used to compress, although the application of that should be well motivated. The paper has limited novelty. The ideas of VQ-VAE, including how to compute the KL divergence with the prior and how to update the latent codes, etc., are used from the original VQ-VAE paper. Using 3D convolutions and U-Net-type skip connections are the architectural choices that seem new in this paper. I have a major concern with the applicability of this paper in the medical imaging context. Where do we use this? It is well known that VAE-type architectures encode information in the latent space and usually compress while doing so. This paper shows that we can do a similar thing by using VQ-VAE on brain images. But the applicability of their work in medical imaging is not well motivated. Can't we use numerous other compression algorithms in computer vision to compress the data? Why spend so many resources on training a deep network to obtain data compression? I have major concerns with the novelty and the relevance of the VQ-VAE in medical imaging in general and the brain image compression task in particular. In addition to that, other concerns have been explained in the Detailed comments section.""",2,1 midl20_82_1,"""It is potentially interesting to normalize logits and make the CAM more interpretable. However, it is not clear how to implement the proposed method. To be specific, the weights for the pooling layer and the network outputs depend on each other (illustrated as a loop in Figure 1). I am not sure how to train this network in an end-to-end fashion. It would be clearer to provide more details in the caption of Figure 1. Additionally, the experimental results are not promising. Based on Table 1 and Figure 2, the P-CAM has higher accuracy compared to the baseline but has poorer performance on the false positive rate. Ergo, the significance of P-CAM is not fully justified. """,2,0 midl20_82_2,"""The key idea is to add a loop in the model to utilize the CAM as a weight to improve the localization capability of the weakly supervised method. The method is applied to the ChestX-ray14 dataset and compared with the simple CAM. The method shows potentially better localization capability in a weakly supervised fashion. The method in principle is able to produce sharper heatmaps for lesion localization than the simple CAM. However, the false positives seem pretty severe, just looking at Figure 2. I am not sure which one is preferable, especially if the threshold can be adjusted. I am also concerned about the impact on classification accuracy of the introduced structure.
Classification accuracy is also important but not reported in the paper. Detailed comments: - How was the 0.9 threshold determined? This seems to be an important (hyper)parameter. And what's the threshold for the regular CAM? - More weakly supervised methods in medical imaging should be compared or discussed. - Figure 1 is confusing. Maybe try color coding to differentiate the classification and CAM paths. - Curious to see how the results compare with simply squaring the weights in the original CAM. """,2,0 midl20_82_3,"""Pros: 1. This paper proposes a probabilistic-CAM pooling to bridge pixel-level localization and image-level classification. Normalized weights are used for weighted average pooling. PCAM explicitly leverages the CAM in a probabilistic fashion. Cons: 1. One advantage of PCAM is that it is trained like a probabilistic model. However, no significant improvement is shown demonstrating that PCAM is more accurate than other CAMs. A larger output leads to better IoBB and worse false positives. 2. I think the author should explicitly point out the difference between this work and the related work (Ilse et al., 2018). Why is the sigmoid function used for bounding? Tanh and sigmoid are used in (Ilse et al., 2018). 3. I'm wondering whether PCAM can contribute to the classification accuracy compared to other CAMs. Other comments: 1. Cannot understand ""This may explain the fact that PCAM pooling has relatively larger average false positives than CAM with LSE pooling"": does this mean PCAM is worse than LSE pooling? 2. A more detailed description of the architecture is preferred. The architecture seems to be different from the architecture used in (Ilse et al., 2018). More details would help reproduce the work. """,2,0 midl20_82_4,"""This paper presents a pooling strategy for Class Activation Maps (CAM) to learn to localize thoracic diseases using image-level supervision. The evaluation is performed using the ChestX-ray14 dataset. Overall, the paper is clear and easy to read but misses a large part of the literature in weakly supervised learning. My main criticisms are: 1. The idea is not novel and should not be presented as such. Several papers have been published using similar or identical poolings in both audio and image processing. 2. The method is compared against only one result (Wang et al., 2017) which was obtained on the ChestX-ray8 dataset, while the paper mentions ChestX-ray14. 3. The AFP is much higher than with LSE pooling. It is not surprising for a method with much higher AFP to also have higher IoBB. Generally, the results are not convincing. Other remarks: 1. (Ilse et al., 2018) should not be the citation for the MIL framework. 2. The MIL framework does not necessarily assign attention weights to each embedding. This is one way to do it but there are others in the literature. 3. Several choices that were made for the evaluation are not justified: the choice of 0.9 for the threshold, the comparison at IoBB > 0.5 (a table in supplementary material could have been added for other values).""",2,0 midl20_83_1,"""The authors look into machine learning model generalization among large-scale chest x-ray datasets. This work aims to provide supporting evidence for which diagnosis tasks are consistent across datasets and what causes the issue of generalization, if any. The paper concludes that the poor generalization is not due to a shift in the images but instead to a shift in the labels.
The problem that this work looks into is interesting and of growing importance, given the fact that more large-scale medical image datasets have become publicly available while the number of high-quality labels is still limited. ""If domain shift was only present in the images then it is unexpected that we would observe over half of the tasks perform well while the remaining have very variable results"". I agree, but how does this lead to ""the issue of generalization is not due to a shift in the images but instead a shift in the labels""? It is possible that, among the several datasets, some datasets are ""closer"" because the hospitals might have used the same x-ray manufacturer or similar acquisition protocols. I don't see why ""a shift in the images"" is ruled out. Some conclusions might be debatable but the work will forge meaningful discussions in the MIDL community. """,4,1 midl20_83_2,"""The study aims at quantification of x-ray based diagnosis, and explores generalization of the algorithm across multiple different data sets. Authors claim that the generalization gap is due to label shifts, not image amount. Authors used multiple models and data sets to support their claims; results are promising, evaluations are convincing. -- Authors have some interesting conclusions for the generalization issue. Those are important take-home messages, I believe. For instance, authors showed that even if a model trained using all data sets performs well, it does not necessarily reflect true generalization performance. It shows that not the multi-center setting, but some other factors are more important for generalization (the claim is label shift). -- Experiments have been done on multiple data sets, available at public resources. Evaluations seem appropriate. -- Three baseline models are strong enough, and questions raised for the generalization have been answered over these models (equation 1, weighted model combination). -- (minor) The paper is in the validation category, not really methodological, because innovation is limited (I like the exploration of the generalization gap, though). -- It could be interesting to see the same problem from a transfer learning perspective where models are learned from the same source but fine-tuned later on specific data; that can perhaps identify further issues that are not entirely known in the ML community. The study asks valid questions about the generalization gap, and experiments show some interesting results; multiple data sets were used and multiple models were combined to do classification on x-ray data. Technical innovation is limited, but the validation perspective is strong enough for a poster presentation.""",3,1 midl20_83_3,"""This paper evaluates cross-domain generalization in X-ray classification tasks. The key idea is to investigate and shed light on the important problem of cross-domain generalization. The paper concludes that label-shift is the main factor that hinders cross-domain generalization. Significance: cross-domain generalization is an essential problem in medical imaging; however, the argument of label-shift is unwarranted and unconvincing. 1. The paper is well-written and easy to understand. 2. The problem of cross-domain generalization is very interesting, and the authors studied this problem on seven datasets. 3.
The analysis of the model agreement provides an interesting perspective for understanding cross-domain generalization. The main weakness of this paper is that the argument of label-shift is unwarranted. 1. Performance: it is surprising that no domain adaptation or domain generalization models are evaluated, given the topic being cross-domain generalization. If the authors would like to argue that *label-shift* is more important than *covariate-shift*, it would be important to provide evidence to show that current methods for addressing covariate-shift are not suitable for cross-domain generalization in X-ray classification tasks. Otherwise, the argument that the issue of generalization is *not* due to a shift in the images but instead a shift in the labels is unwarranted and unconvincing. 2. Agreement: it is not very informative to conclude that models can disagree yet still perform well, because models could agree/disagree in many different ways. Models with high agreement are not necessarily better if the agreement is merely on misclassified labels. 3. Model agreement vs. data agreement. The authors use the disagreements between *models* (in section 5) as evidence to argue inconsistencies between *human annotations* (in the abstract and introduction). However, the disagreement between models could arise from the stochastic nature of optimization, which does not necessarily reflect the inconsistencies between human annotations. The inductive leap from model inconsistency to label inconsistency is not scientific. 4. Representation: the mean difference between weight vectors is not necessarily comparable because it could be in part determined by the magnitudes of the weight vectors. Moreover, this could also be confounded by overfitting, where higher magnitudes of weight vectors are more likely to overfit. 5. This work presents evidence that the issue of generalization is *NOT* due to a shift in the images but instead a shift in the labels. Is the shift in labels referring to prior probability shift? Is the shift in the images referring to covariate shift? This is important because ""prior probability shift"" and ""covariate shift"" have precise mathematical definitions, whereas ""shift in the images"" and ""shift in the labels"" are unclear. I would appreciate more consistent notation within our research community. 6. ""If domain shift was only present in the images then it is unexpected that we would observe over half of the tasks perform well while the remaining have very variable results."" Domain shifts could have different degrees of impact on different tasks, which could also depend on the nature and difficulty of those tasks. The reasoning that ""over half of the tasks do not suffer from domain shift"" does NOT automatically imply ""domain shift is not an issue for the remaining tasks"". 1. Interesting problem setup, but the main argument on label-shift is unwarranted. 2. Model disagreement is confounded with annotation inconsistencies. 3. The lack of a domain adaptation baseline weakens the value of this paper. I think this is a paper with potentially high impact on our research community; therefore, sound reasoning and reliable evidence are necessary, which this manuscript is lacking, so as not to mislead future research.""",2,1 midl20_83_4,"""The paper uses multiple large-scale chest x-ray datasets and examines whether the well-known issue of domain-shift is caused only by image appearance or whether there is in fact label-shift due to the culture/protocol/automatic-labeller etc.
that lies behind the labelling process. It investigates 3 elements of domain shift: Performance (the performance of models trained on one dataset and tested on others), Agreement (how well models trained on different datasets agree with each other when tested on a common test set), Representation (how the internal representation of a label differs between models trained on different datasets). The idea of the paper is very interesting and certainly the theory that labelling varies between different public datasets is one that is intuitively sensible and requires investigation. The authors have conducted many in-depth experiments and found some interesting results. This is the first serious effort I have seen to investigate this difficult issue. I found parts of the paper were poorly explained and difficult to follow. There are many combinations of networks/datasets/experiments with training and testing implemented in different ways, and the wording and explanations could be clarified more carefully. I am not sure that I agree with all the conclusions of the paper; however, I think they would make for interesting discussion. I think the premise for the paper is good and the experiments are thorough and detailed; however, the text needs some work to improve clarity and the authors should spend more words to back up their claims of evidential label-shift. """,3,1 midl20_84_1,"""The authors present computational improvements to speed up training in medical image analysis. They propose a joint system that combines ""smart caching"", adaptive patch sampling ratios, the NovoGrad optimizer, mixed precision computation and multi-GPU use. The presented results show a 12x - 26x reduction in training times over a baseline. If true, this would be a significant step forward. The main strengths of this paper lie in the detailed description of the proposed method. The schematic figure (Figure 2) also helps convey the pipeline strategy. Results are well organized into tables that are easily understood. This paper has three main weaknesses: 1) it does not discuss extremely similar ideas, 2) the implementation description for the baseline is practically nonexistent, and 3) the manuscript itself is not of publication quality. 1) Related work: ""smart cache"" is presented in this paper as a completely novel idea, but pre-loading data into RAM is not a new concept. For example, TensorFlow has advanced pre-fetching of data (pseudo-url). PyTorch and other frameworks have similar features. None of these are mentioned at all in the paper and no performance comparison is made. 2) The baseline is severely lacking. It is not conveyed what training set-up was used. A number of trivial settings (such as not using built-in data prefetching) can cause training times to increase 20x and therefore make the comparisons meaningless. 3) The paper contains a very high number of grammatical and spelling errors. Screenshots of TensorBoard are not publication-quality graphics, and Chapter 3 ""Execution model"" contains no detail on the execution model. Two of the frameworks discussed are misattributed. It will require significant rewriting and editing before publication. Although the motivation is clear and the reported results claim a significant improvement, it is impossible to tell whether the difference stems from any of the methods discussed or from implementation settings for the baseline that cause slow training times.
For a computational complexity argument, there is a lack of theoretical bounds (even approximations thereof). Similarly, many of the ideas here (e.g. ""smart caching"") are very similar to available features (e.g. pre-fetching in TensorFlow). The authors have not discussed what the advantages or limitations of these two are.""",1,1 midl20_84_2,"""Analyzing high-dimensional medical images (2D/3D/4D CT, MRI, histopathological images, etc.) plays an important role in many biomedical applications. In this paper, the author provides a new solution for improving model training efficiency, making training roughly 20X faster than the original training process. The motivation of this paper is to enable researchers and radiologists to improve efficiency in their clinical studies. Model compression is an important problem in the deep learning domain. There have been many works on CNN/RNN compression since 2016. However, there is a lack of work on model compression for the 3D medical image problem. The motivation of this paper is to speed up the training process rather than reduce the inference time or compress the model size, which is very useful for the medical imaging community. The dataset and network configuration were described very clearly. 1. I would suggest the author show the comparison snapshot (before compression vs. after compression) to better illustrate the model efficiency. 2. Is Section 3 the proposed method? It confused me, as it seems like a similar description to Section 2. I feel like these should all go into the motivation section. 3. The ""Smart caching and smarter caching"" and ""Adaptive positive/negative sampling ratio"" should go into the method part, not the experimental settings. 4. I understand the intuition and motivation of this work; however, could the author clearly state what is new here, i.e. what is the novelty of this paper? From the current draft, I cannot find what specifically is new in the method. Based on the current draft, many things are not clear, for example the novelty or the ""new"" idea here, and after reading the whole draft I still do not really follow the proposed method. I would give a weak reject, unless the author can address all my questions in the rebuttal. """,2,1 midl20_84_3,"""The paper presents a framework to reduce the computation time and memory consumption of machine learning models in the medical imaging domain. Medical images and scans are large due to their rich nature, so problems in the field arise from not being able to fit the entire network on a GPU, slow data reads, and not fully exploiting the cache. The authors propose smart caching, saving intermediate results in RAM, reduction of training data to 16-bit floats, using multiple GPUs, and using an optimizer tailored for fast convergence, in order to achieve training that is overall 20 times faster than before. The main contribution/strength of the paper is achieving substantial improvements in the training times for large-scale medical image models. By exploiting a myriad of fine-grained techniques at different steps of the entire system, the overall speed-up, a 20-times improvement, is substantial and offers a new avenue for the adoption of their framework in the field by anyone who is dealing with large-scale medical image datasets/systems.
Another strength of the paper is that it plays the role of a handbook for anyone who is interested in improving their AI models without undergoing a complete change. One can read this paper and adapt one or some of the steps as guidelines in order to speed up their process, instead of fully rewriting their code base or changing the infrastructure that they used for their research. This paper is being evaluated in the improved methodology category even though some portion of the steps that the authors took fall into validation, because for some steps they simply reuse existing improvements noted in the literature. That being said, some optimization techniques utilized by the authors are as simple as changing the bit size during training, changing the optimizer, or changing the GPU count. In terms of a validation category, this paper reports impressive results since they are definitely a sign of improvement in the computational running time. But usually, for a framework like this, if there is a speed-up in one section, there is a slowdown in other sections. I would be interested in learning how much of an overhead their preparation took when they wanted to apply their framework. Overall, the results show improvements, since one can assume that the overall cost of adopting this framework from scratch should not be more than the initial cost of training the baseline AI model, let's say native TF, which was 13 hours for spleen scans in one of their cases. This paper has a very practical objective, that is, to increase speed-ups. The authors seem to miss the discussion on the preparation time/overhead caused by their framework. That is, how much change does one need to make to their current machine learning model creation process in order to adopt their framework? Also, the majority of the improved steps in their framework are as simple as changing the GPU count, the bit size of numbers, or the optimizer, which doesn't offer a greatly novel approach but rather an aggregation of previously reported refinements from the literature. It is definitely a useful paper, but I am not sure if this is the right medium for publication.""",2,1 midl20_84_4,"""In this paper, the authors try to overcome the issue of long training times needed for training deep learning models in medical image analysis. The authors focus on a 3D-segmentation task that is known to be memory intensive for both GPU and CPU. They suggest a smarter caching method that keeps track of the false positives and false negatives. They compare their results to a baseline without a smarter cache. Overall the paper is well written and nicely structured. The methods are clearly described. The authors focus on reducing training time for researchers in the field of medical image analysis with the possibility to scale further. Some areas need some clarification, see below. - Could the authors give some insight into how much time is spent on loading images from disk, augmentations, CPU time, etc. - Could the authors give some details on what kind of system they are using, what kind of CPU, what kind of HDD/SSD, how much memory, etc. This can influence the amount of disk IO. - The smart caching method keeps track of the FPs, FNs, TPs and TNs after a validation run. How does this compare to on-the-fly hard negative mining? - Can the authors explain how this would scale for a multi-class problem. - Could the authors compare their results to a network where, e.g., only AMP or NovoGrad is applied.
- Could the authors explain how the increased number of GPUs ensures that there is less disk IO, since there are more GPUs that need to be fed with patches, which therefore increases the memory footprint. - Minor: the references to tables and figures are inconsistent. Some tables/figures are never referenced and the order is also not always correct. Please address. The paper discusses an important item that concerns all researchers in the field; however, some critical points are not clearly described in the paper. It is not clear on what kind of system the authors tested the solution, nor how the technique differs from hard negative mining. """,3,1 midl20_85_1,"""The paper proposes a multi-task learning framework for MRI reconstruction and segmentation from under-sampled k-space data. Results indicate that fairly accurate segmentations can be obtained already with highly under-sampled data. The proposed method is compared with two variants, which indicate that (1) end-to-end learning of both tasks and (2) sharing encoder features are beneficial for segmentation performance. * The paper tackles an important, but relatively under-studied problem of doing segmentation from under-sampled k-space data. Such a method could have high potential in accelerating personalized MR sequencing. * For the methods which are compared against, the experiments are carried out in a structured manner and the results well-presented. * The main methodological novelty of the paper is a change in the MTL architecture, where the skip connections are from the reconstruction decoder to the segmentation decoder, instead of from the shared encoder to the segmentation decoder. However, the importance of this change is only verbally motivated, but not experimentally demonstrated. I think it would be important to compare against a 'naive' MTL architecture, with a shared encoder and skip connections from the encoder to both decoders. * I am a bit confused as to what the main goal of the paper is. Is it to obtain segmentations from under-sampled k-space data? Or is it to get both segmentations as well as reconstructions? If it is the former, I think it would make sense to check if a CNN can directly segment the under-sampled image, if trained like this in a supervised manner. If it is the latter, it would be important to compare with at least one of the several methods in the literature proposed for reconstructing under-sampled MRIs. With a relatively small methodological novelty, I think this is mainly a validation paper. In this case, appropriate comparisons with related works (as mentioned in the 'weaknesses') are important for acceptance, in my opinion.""",2,1 midl20_85_2,"""The authors proposed a TB-recon network to perform image reconstruction and tissue segmentation simultaneously from undersampled DESS knee MRI rather than treating them as separate problems. The unique aspects of the proposed network are (1) sharing the encoding path (which is not new) and (2) introducing intra-task skip connections in the decoding path allowing information flow between tasks. The authors compared the proposed network to solving the two tasks sequentially, independently (cascade-rec-seg) and jointly (end-to-end cascade-rec-seg). Experimental results showed that the proposed network significantly outperforms the other two alternatives. The problem is interesting, and the authors proposed a comprehensive technique to tackle it.
One important finding of this paper is that performing image segmentation and image reconstruction at the same time actually improves the quality of both tasks compared to performing them independently. It is nice to have musculoskeletal-imaging-trained MDs evaluate the actual quality of the images rather than just depending on evaluation metrics. Having the intra-task skip connections is an important novel aspect of the pipeline. However, there is no experiment that demonstrates the importance of these connections. It is possible that the network will perform well without these skip connections. The authors use the same set of hyper-parameters (weights between losses) optimized for the proposed work in the end-to-end-cascade-rec-seg experiment, which may not be optimal for the latter approach. Indeed, the image reconstruction and segmentation quality are much worse even compared to the cascade-rec-seg experiment. The authors claimed that its sub-optimal performance is because it is much more computationally demanding, which I have trouble understanding. Lack of comparisons with other state-of-the-art approaches. Although there are some weaknesses, overall, the paper is well written and well organized. The problem is interesting, and the results are promising. If the authors can address the comments sufficiently, I would recommend the acceptance of this paper.""",3,1 midl20_85_3,"""The authors used a multitask learning approach for simultaneous reconstruction and segmentation on structural knee MRI. The proposed model was compared to two cascaded reconstruction and segmentation networks. The results suggest that the multitask approach improved both reconstruction and segmentation tasks. Unlike general multitask approaches where the tasks are separated in late layers, the authors designed the CNN architecture with a shared encoding path but separate decoding paths with skip connections, which is the strength of the paper. Comparative studies need to be performed better, as in their current form they do not provide a clear indication of why the proposed method provides better results. One of the main design criteria to consider is the change in the number of initial feature channels and layers. The number of feature channels is 16 in the proposed network, 6 in cascade-rec-seg and 8 in end-to-end-cascade-rec-seg. The reason was identified as GPU memory limitations, but it is not clear why it was not kept, e.g., as 6 for all the networks. Moreover, a 2-level encoder-decoder is used in the proposed method but it was changed to a 4-level version in the other models. These changes in the comparative study make the results questionable. Since both reconstruction and segmentation are learned in this study, it is not clear how a segmentation-only model - used after a traditional reconstruction using GRAPPA or SENSE type approaches - would compare to the results presented in this work. There are so many changes in the architectures for the comparative study which would directly affect the results presented in this work. These types of changes need to be minimized to make the results convincing. """,2,1 midl20_85_4,"""The standard imaging analysis pipeline performs image reconstruction and segmentation sequentially. As a result, high-quality images are usually needed to improve segmentation results. This paper introduced a novel framework for simultaneous image reconstruction and segmentation using a CNN-based method. The CNN-based method imposed the same encoding path for the image reconstruction and segmentation tasks.
The image reconstruction and segmentation paths share the same feature embedding. Moreover, the network was trained using an objective function that combines the reconstruction loss and the segmentation loss. Experimental results show that this method produces fewer false positive and false negative segmentations than the cascade-rec-seg and end-to-end-cascade methods. The weaknesses of this work were also reasonably addressed. I don't see any obvious weakness of this work besides the comments mentioned by the authors in the paper. I suggest that the authors revise Figure 3 to zoom into the details so that they can be seen more easily. It is a very reasonable application for a form of multitask learning (MTL) to solve the problem of simultaneous image reconstruction and segmentation. This method has better performance than the cascade analysis methods. """,4,1 midl20_86_1,"""Pros: - Well written - Clearly motivated - Quality and clarity of the presentation are great - Experiments are conducted on a very large dataset Cons: - Unclarities in methodology (1): for n volumes, you end up with n-1 subtraction images. How can you multiply these elementwise with n volumes? - Unclarities in methodology (2): How is MC dropout applied, or where is dropout placed in the network? - Originality: The way the focus was put in this work forces me to question the novelty. I'd have loved to see more details and justifications on the design choices of the network input - Data (Minor): Why was T2 used instead of FLAIR? - How would one determine a threshold on uncertainty, which commonly is not between 0 and 1, on a test set for which training set uncertainties are not known? - What precisely is meant by ""at reference""? - What exactly is the output of the 3D Unet? One segmentation volume, or 3 of them? - The reported metric: ROC and AUROC are not suitable from my point of view in this context. I'd assume that lesion and background pixels are heavily imbalanced, which calls for the Precision-Recall curve and the respective area under it.""",2,0 midl20_86_2,"""Quality and clarity: The short paper is well-written and easy to follow. Significance: The presented work is very similar to the work of Nair et al., 2020. The difference is mainly the modified segmentation task. Due to the similarity of the problem statement, the methods used, and the significance of the reported result (and the fact that this work is not introduced as a short paper of an existing publication), the benefit for the readership is limited. Pros: - The work is well-motivated, and the short paper nicely introduces the problem. - By focusing on the assertion confidences, this work addresses a critical issue with regards to the clinical integration of DL approaches. Cons: - The management of the available space is poor. Space limitations are mentioned as a reason not to show additional results. At the same time, a large figure of a 3D U-Net architecture is presented. The benefit of showing architecture details is minimal, especially because the essential information about the dropout locations is missing. I would prefer seeing additional results rather than the network architecture. - The benefit of the proposed approach is unclear. The work mainly shows that ignoring uncertain voxels leads to improved results. As I understand it, observing such a benefit only requires that some FP/FN voxels express uncertainty, which should be the case for any uncertainty estimation method.
The method should thus, at least, be compared to the standard softmax probability (or entropy) output of the network to assess the benefit of the proposed method. - In the abstract, the work claims: We explore whether MC-Dropout measures of uncertainty lead to confident assertions when the network output is correct, and are uncertain when incorrect [...]. I do not see how the results support this claim since the evaluation only requires that the certain voxels are correct (as correctly mentioned in the conclusion). Minor: - In the text, the ROC is defined as TPR vs. FPR, whereas in the plot it is TPR vs. FDR. - Baseline (in Figure 3) is not explained. I assume it to be the absence of an uncertainty threshold.""",2,0 midl20_86_3,"""Summary: This paper explored uncertainty measurement in a lesion segmentation task. A U-Net architecture is used as the backbone network. The Monte-Carlo Dropout approach is used to measure the uncertainty. New or enlarging lesions are the main segmentation target, thus subtraction images are also fed to the network. Pros: The authors accomplished a complete NE lesion segmentation task using a 3D U-Net architecture, and incorporated MC-dropout approaches to measure the uncertainty. The whole paper is well written and easy to follow. Cons: 1) This work is more like a reproduction of (a part of) previous work (Nair, MIA, 2020). The major difference is the validation dataset. Personally, I think this paper lacks novelty. The authors emphasized that the segmentation task for NE lesions is challenging, but they did not give any support for this claim. 2) Since the authors claimed ""we develop a modified U-Net"", the modified part should be well explained. I cannot find any major difference from the original U-Net architecture except for the input data. 3) More details should be well explained within the limited pages, such as the network architecture and the detailed filtering. 4) Typos and grammatical errors need to be fixed, such as 'a test set', 'Figure 3.', ""t"", 'follow the same procedure followed by...' Comments: I think the authors could go deeper into uncertainty measurement, not just change the dataset. As claimed in the abstract, ""... thereby permitting their integration into clinical workflows and downstream inference tasks. Going deeper into downstream clinical workflows would be more interesting than re-validating previous work.""",2,0 midl20_86_4,"""The paper uses MCDO for uncertainty estimation in the segmentation of 'New/Enlarging' brain lesions. It is well written and the problem statement is clear. With the use of uncertainty information, they leap from deterministic segmentation, which can be perilous in the given medical context, to a probabilistic approach. The validation is sound and the results are promising. However, in an extended version of the paper, I would love to see a more comprehensive list of methods for uncertainty estimation. MCDO is not the only one and it has its failure modes. From a decent comparison of such methods, we can learn more about the nature of this important medical problem as well as the performance of other methods on this real-life application. One thing I have to complain about is the use of a proprietary dataset. It sounds like a good collection, but the closed-source nature of medical data always gives me a bad taste. In my opinion, proprietary datasets, file formats, etc... hinder the progress overall. In summary, the paper is of interest to the MIDL audience and could benefit from further discussions.
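Since several of the reviews above ask how MC-Dropout is actually applied at test time, here is a minimal, hypothetical sketch (assuming a PyTorch `model` whose dropout layers should stay stochastic; the number of samples T and the use of variance as the uncertainty measure are illustrative choices, not necessarily what the authors did):

```python
# Sketch of MC-Dropout inference: T stochastic forward passes, mean = prediction,
# per-voxel variance = uncertainty map.
import torch

def mc_dropout_predict(model, x, T=20):
    model.eval()
    # re-enable only the dropout layers so that they stay active at test time
    for m in model.modules():
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d, torch.nn.Dropout3d)):
            m.train()
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(T)])
    mean = probs.mean(dim=0)                    # averaged softmax output
    uncertainty = probs.var(dim=0).sum(dim=1)   # predictive variance per voxel
    return mean, uncertainty
```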
""",3,0 midl20_87_1,"""The authors propose a deep learning architecture based on the well established U-Net architecture where they propose to stack neighboring images of the image to segment to add context. They evaluate their method 21 images of type B aorta dissection and compare their results with the state of the art techniques. * Good comparison with other state of the art methods * Clear explanation of the problem * Discussion on the limitations of the method. The authors talk about the problem of manual annotations coming from different users. This is an interesting point. I wonder if they experimented with this. * The contributions of the paper should be better explained as there are very subtle differences with U-Net. It should be made very clear what makes this method different from U-Net: 1) The added value of 3D SU Net is not clear. Adding the fourth channel seems trivial as has no real effect. 2) Isn't the 2D SU Net a sort of variation of the standard 3D UNet? * The ground truth is obtained from an automated tool. Why don't just using it? I think this paper is well-written and has some But the motivation has a flaw: the authors are using an already existing tool to obtain their ground truth automatically. Some times this fails and then it has to be corrected. Is there then a need for the proposed method? I assume that, if the final segmentations were gonna be used, they would have to do some manual corrections as well. Why not staying with what is there? The authors should quantify the advantages between the new method and the commercial tool.""",2,1 midl20_87_2,"""The authors propose to segment aortic dissections in CTA images using a U-Net. To provide more consistent segmentation results, the U-Net takes in multiple slices at the same time as separate channels. Results look good for the proposed method, and true lumen and false lumen are separated quite well. - The paper is quite well written and figures look nice. - Results for the SU-Net look good. - Segmentation of aorta dissections is a clinically relevant problem. - The authors provide a comparison to other CNN architectures. - The main contribution of the paper seems to be to use stacks of slices as inputs to a CNN instead of single slices, to enable smoothness between segmentations in adjacent slices. This is not a very novel idea, which has been extensively studied before 3D CNNs became widely used. Moreover, by using these slices as channels, valuable signal information along the z-axis might be missed. - The authors used a small data set of 21 patients. The proposed method is compared to a 3D U-net and V-net, but the input sizes for the 2D and 3D networks are very different. The 3D networks take 128x128x128 voxels as input, while the 2D networks only take a couple of slices as input. Hence, the chance of overfitting is a lot larger for the 3D networks. Results in Table 1 and Figure 5 are surprisingly poor for the 3D networks. A fairer comparison would take same-sized inputs for the multi-slice and 3D networks. - Results in Fig. 4 are not very convincing, why is there a drop in performance at 5 slices? Were these experiments repeated multiple times with different seeds? The results look nice, but I think there is little novelty in the proposed method. The paper could have value as a systematic comparison of slice-stack inputs to 3D inputs, but I doubt whether the 3D CNNs were used correctly. The results that the authors provide here for 3D methods are very poor. 
""",2,1 midl20_87_3,"""The ""stacked U-net"" (""SU-net"") is just a regular U-net applied to multiple slices at once. This is typically called 2.5D input. The only very minor difference is that they also let the network output several slices (but only use the middle slice at inference time). Apparently, the authors were content with their results, but they're not evaluated on a public dataset, so it is hard to judge whether the comparison is fair. The outputs of the methods compared with look surprisingly bad to me. The paper is well-written and easy to understand and contains useful illustrations. The task & dataset is interesting, but unfortunately neither large, nor public. The authors have put less important things into an appendix. The SU-net is hardly novel; it is mostly a standard U-net (with reduced numbers of filters) in a ""2.5D"" setting. In my personal experience, such an approach often does not perform better than a 2D U-net, let alone properly trained 3D U-nets. The multi-slice loss is a stronger supervision than the common one, but the difference is not separately evaluated. Apparently, the authors used padded convolutions (they do not describe padding procedures, which would have been very important for the 3D SU-net when applied on 3 slices only), which is not a good choice for segmentation tasks, since the results depend on the position within the image. The authors write that ""[3D U-net and V-net] input whole volumes of medical image data "" which I interpret as an additional clue that they are not familiar with a proper overlapping tile strategy. (U-nets are fully convolutional and can therefore be applied to properly padded subregions of the input image, while allowing to stitch together their outputs to a complete result without any artifacts from this tiling.) The authors claim ""significant"" improvements, but no statistical tests were performed or described. Some space is wasted by plotting both DC and JC. For some reason, CT slices without aortic dissection were discarded (instead of using them as negative examples). - The contribution is very minor. - The paper is somehow borderline between weak and strong reject for me, because there are not too many peer-reviewed papers on TBAD segmentation yet. On the other hand, some of the preprints and reports I could find contain more technological novelty than this one. - There are some technical flaws that limit the paper's contribution.""",1,1 midl20_87_4,"""The authors work on aorta segmentation from computed tomography angiography with focus on aortic dissections of type B, where the aorta decomposes into two channels, the true and the false lumen. They present a variant of the well known U-Net where instead of one slice a stack with extra added neighboring slices is passed to the net. These neighboring slices deliver some volumetric contextual information and thus improve segmentation performance. The method is compared to other CNN segmentation approaches on 21 scans with manual ground truth segmentations and shows better performance than these. The paper is well written, easy to read and its overall structure is sound. The accuracy measures comprise of set-theoretic ones (Dice and Jacard coefficient) as well as distance based ones (Hausdorff distance) allowing for a more holistic assessment of segmentation performance. A parameter study w.r.t. number of neighboring slices was carried out to justify the respective choice. 
Method: The proposed variation of the U-Net is incremental and - more importantly - the idea lacks novelty to some extent. Please see: Ambellan, Felix, et al. ""Automated segmentation of knee bone and cartilage combining statistical shape knowledge and convolutional neural networks"", Medical Image Analysis 52 (2019): 109-118. Ambellan et al. are also employing neighboring slices within a U-Net framework (see Sec. 3.1), with a slight difference w.r.t. the output of the net. The proposed net outputs a segmentation mask for the whole stack, whereas Ambellan et al. output a mask for the center slice only. However, at test time the proposed method also utilizes the center slice only and discards the others. Validation: The authors mention the work of Cao et al. (Cao, Long, et al. ""Fully automatic segmentation of type B aortic dissection from CTA images enabled by deep learning."" European Journal of Radiology 121 (2019): 108713.) in Sec. 1.1 (Related work) several times. Amongst others they state: '...In the work by Cao et al. (2019), the authors used a 3D U-net to segment the true and false lumen...'. In Sec. 4.1 (Study limitations) the authors again refer to Cao et al., who wrote that different experts achieve a Dice coefficient of 0.92-0.94 when segmenting the same data. Reading this, I wonder why the authors don't discuss the results of Cao et al. and/or compare to Cao et al.'s method directly (the 3D U-Net used in the comparison is not the one of Cao et al.), since they work on exactly the same task. The reported Dice coefficients of Cao et al. (0.93(+-0.01) for the whole aorta, 0.93(+-0.01) for the true lumen and 0.91(+-0.02) for the false lumen) are superior to the highest given in the proposed manuscript (0.92 w.a., 0.82 t.l., 0.87 f.l.), especially for the true/false lumen. This should not be ignored. The paper comes with two major weaknesses that concern both aspects of the contribution. For the methodological one I hardly see any quick fix, and the application aspect would at least require some additional evaluation that does not fit the review/rebuttal time frame; consequently my rating has to be a strong reject.""",1,1 midl20_88_1,"""The paper utilizes a pyramid attention approach to localize landmarks for cephalometric X-ray applications. The authors utilize a ""glimpse"" approach to better localize the landmark locations. The authors leverage a publicly available dataset, which enables comparison with similar papers reported in the literature. The authors report better results than other methods proposed in the literature. Furthermore, the results reported are within the interobserver variability. The attention approach proposed is novel and the paper is well written. Please provide more information on the UNET implementation. How many layers deep? How many filters? Did the authors utilize any regularization approach? What dataset did the authors utilize to estimate the optimal threshold from the ROC curve? Based on the results presented on the test set the deviation was 5 degrees. What is the clinical threshold that can change patient management? Is there any information on the interobserver variability? The authors need to provide more details on the implementation of the UNET architecture. Additionally, the authors need to further explain if they used the training-validation set to select the optimal threshold.
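On the question of how an operating threshold is picked from the ROC curve, one common recipe (a sketch of standard practice, not necessarily what the authors did) is to choose the threshold on a held-out validation split, e.g. by maximizing Youden's J statistic, and then apply it unchanged to the test set:

```python
# Sketch: pick an operating point from the validation ROC curve (Youden's J = TPR - FPR).
import numpy as np
from sklearn.metrics import roc_curve

def youden_threshold(y_true, y_score):
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    return thresholds[np.argmax(tpr - fpr)]

# y_val, p_val would come from the training/validation split, never from the test set:
# thr = youden_threshold(y_val, p_val)
```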
""",3,1 midl20_88_2,"""Authors proposed an interesting approach for landmark localization, where instead of widely accepted heatmap prediction of landmark position, they regress the relative displacement of the centre of the image patches to the landmark location. This cascade refinement itself is not a new concept since it was heavily used in regression random forest. Moreover, the only difference compares to the estimation of landmark coordinates in image space is the use of the patches as an input to CNN instead of the whole image. Since different works have shown that regression of the coordinates is less effective than the heatmap regression, authors argue their approach with less memory consumption as it works with patches instead of the whole image. To achieve a large receptive field they simultaneously process the image on a different scale, i.e. pyramidal approach. 1. interesting combination of pyramidal approach and iterative refinement used for landmark localization. 2. The method is evaluated on an ISBI challenge dataset (scall 2D X-ray). 3. SOTA results are presented 1. The authors used a pyramidal approach to generate specialized features. There is no experiment to show that a pyramidal approach is actually beneficial. For example, the proposed work has many similarities to RL landmark detection approach (Alansary et al., 2019) and they use a single scale approach. The advantage of the pyramidal approach could be demonstrated by comparing the results with the model using a single glimpse at e.g. resolution I_{N/2}? 2. Due to the similarity, the method could also be compared with RL approach, as the code of the Alansary et al. is publicly available. 3. During training, the predicted position of the landmark is used in the iterative loop to initialize the new patch position. This dynamic selection of the patch position during training is repeated 10 times. As the training on the patches in an iteration loop is independent, it is not clear why authors used this patch selection technique. Namely, the prediction of the landmark position at the beginning of the training is random and at the end of the training is accurate, so it can be assumed that trained network will be fine-tuned on the patches around the GT landmark position. This might also explain why during training 10 iterations are required and only 3 during testing. Did authors try to sample patch positions from normal distribution around the landmark, and trained the network on each patch independently (i.e. without t=1 to 10 do loop)? 4. Why do authors sample the position of the starting patch from normal distribution around the centre of the image rather than randomly from the image? 5. Understanding Section 2.2. Spatialized Features is very difficult. Since this is a technical contribution of the manuscript, authors should consider rewriting the section. Perhaps generating an additional figure instead of Fig. 1 Right, while combining Fig. 1 Left and Right into one (i.e. use the real cephalometric image instead of blank image I_1 to I_N ). 6. Unlike heatmap based methods, where the output can be a single landmark or multi-landmarks prediction, in the proposed approach it is not clear whether a multi-landmark approach is possible. This could be a problem for multi-landmark localization task with a large number of landmarks. Can the authors comment on this? 7. The authors used an average of the senior doctor (doctor 1) and junior doctor (doctor 2) as GT labels. This is not the same GT in the Lindner at al (2016). 
They used the annotation of the junior doctor as GT. Also, the result values of Zhong et al. (2019) are not the same as reported in the original paper. Why did the authors not include results on the Test 2 dataset? 8. The statement 'This intrinsic augmentation and transfer learning efficiency help explain the effectiveness of our approach even with a very small training set of 150 images.' is not experimentally confirmed. The authors should either remove the sentence or compare results in the experiments without intrinsic augmentation and transfer learning. Minor comments: 1. The regression target pseudo-formula is named 'error'. 'Displacement' or 'offset' might be a better expression. 2. Why did the authors use only the green channel and not the grey image as input to all three channels? 3. It might be better if the authors use 'image size' instead of 'resolution' in the following sentences: The first level I_1 is full resolution (the original image), and N is set so that the resolution of I_N is approximately the resolution of a glimpse (64 x 64). For the x-ray images (resolution 2400 x 1935), N = 6. I am strongly recommending the manuscript for acceptance due to the interesting technical contribution encapsulated around the pyramidal approach and iterative refinement, as well as the SOTA results on a challenge data set. """,4,1 midl20_89_1,"""This paper aims to improve image translation for the task of plaque segmentation, by building an approach which can preserve small/subtle structures as well as global and intermediate structures. The approach makes use of UNIT (Liu et al., 2017), an image translation network with a single shared latent space, and a self-expressive loss (Ji et al., 2017), which helps with separating the subspaces and cross-domain translation. Pros: The authors explain the task and motivations well, and validate against another well known translation network, UNIT. The results are promising, with improved performance compared to UNIT. A small decrease in performance is shown in comparison to training on real images, which is to be expected. Cons: The method could be more thoroughly validated, and a more detailed illustration of the network could be given. """,4,0 midl20_89_2,"""This paper proposes to add a self-expressiveness regularization term to learn a union of subspaces for image-to-image translation in the medical domain. It's shown that such a self-expressiveness constraint can help to preserve subtle structures during image translation, which is critical for medical tasks, such as plaque detection. The motivation and methodology are well explained with proper reference works. The improvement on plaque detection is significant. Comment: It would be nice if the authors could also show some visualisations of the latent space, with comparisons with and without the constraint. This will provide more insights or explanations.""",3,0 midl20_89_3,"""Summary: This short paper discusses a method for cross-domain plaque detection using image translation methods, translating from pre-contrast to post-contrast CT. The method adds an additional constraint to the learning objective to force the translation model to learn a representation constructed of easily separable subspaces. The authors suggest that this allows the model to better represent the different structures in the images. Their experiments show a decreased drop in performance over competing methods.
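For readers unfamiliar with the self-expressiveness constraint referred to in these reviews, here is a rough, hypothetical sketch in the spirit of Ji et al. (2017): each latent code is reconstructed as a linear combination of the other codes through a trainable coefficient matrix with a zero diagonal, and the reconstruction and sparsity terms are added to the translation objective (dimensions and weights below are illustrative, not taken from the paper):

```python
# Sketch of a self-expressiveness regulariser over latent codes.
import torch
import torch.nn as nn

class SelfExpressive(nn.Module):
    def __init__(self, n_samples):
        super().__init__()
        self.C = nn.Parameter(1e-4 * torch.randn(n_samples, n_samples))

    def forward(self, z):                                   # z: (n_samples, d) latent codes
        C = self.C - torch.diag(torch.diag(self.C))         # forbid trivial self-reconstruction
        z_hat = C @ z
        se_term = ((z_hat - z) ** 2).sum()                  # self-expressiveness
        sparsity = C.abs().sum()                            # encourages subspace structure
        return se_term, sparsity

layer = SelfExpressive(n_samples=32)
se_loss, reg = layer(torch.randn(32, 128))
```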
Strengths: This is an interesting application of the self-expressiveness loss and the reported results show that the proposed method might achieve reasonable cross-domain performance. The experiments are very limited, but seem promising. Weaknesses: The paper is not very generous with information about the implementation of the method: we are told nothing about the encoding and decoding networks or the segmentation model, for example. The comparison with competing methods is very limited. The paper seems to depend heavily on work by Liu et al. It is, of course, a short paper. Questions: The main assumption of the authors is that different anatomical structures would be represented by different subspaces in the representation. It would be interesting to know if this is really what happens in the model, or whether the proposed method improves in other ways.""",3,0 midl20_89_4,"""The authors propose to extend the UNIT model for image-to-image translation and apply this to synthesis of non-contrast CT images from contrast-enhanced CT images. Subsequent experiments show that aortic calcifications can be automatically identified in the synthetic non-contrast images, but not in the original contrast images. Strengths: - It's very strong that the authors not only perform image-to-image translation, but also evaluate the effect of this translation on a subsequent segmentation task. - Good experiments, comparing to a situation without image translation, and one with single-subspace image translation. Quantitative results are convincing. - Results suggest that the proposed approach would allow automatic aortic calcium scoring in contrast-enhanced images without the need for annotated training data in these images. Weaknesses: - Quite an information-dense paper; it's not entirely clear what exactly the contribution of the authors is, e.g. subspace clustering seems to have been proposed previously, as has the UNIT model. - The individual models could have been explained a bit better; a diagram would have been useful. - It's unclear what the contribution of the small patches is; it would be interesting to visualize the subspaces using these small patches. Moreover, it's unclear how the number of patches N is determined. - Some typos: for simplicty, one or more subspace.""",4,0 midl20_90_1,"""The paper proposes a modified BigGAN for histopathology images. Many experiments have been conducted with small patches. The paper is not structured well and is difficult to follow. Asking pathologists to find the fake images may not be the best way to evaluate the approach. I am not sure what to understand from the ROC curve. A new BigGAN for an interesting field; many experiments. A relatively large number of patches used for evaluation. Different metrics used for evaluation. Pathologists involved in validation, which adds value. - the paper seems to be hastily written - paper not structured well - figure positions within the text do not help - experiments not clearly analyzed - many results in the appendix - explicit applications not shown/motivated Good paper but not structured well; experiments may need more analysis. The paper seems to be hastily written, which results in bad organization and not fully analyzed results. Pathologists involved in validation, which adds value.""",2,1 midl20_90_2,"""In this paper the authors presented a method for using GANs on digital pathology images to represent and generate cancer tissue.
It is a very neat idea to generate tissue images in the H&E domain. It is critical to be able to see the transformation of tissue from benign to malignant. However, I think the authors missed a couple of critical opportunities because of the way they designed their experiments and the problems they wanted to solve. - Applying GANs to digital pathology images would help understand the disease progression mechanism at the tissue level, which is very critical for this domain - The proposed method offers a powerful interpretation of the latent space between benign and malignant - The auto-generated fake images look reasonable - The method used for testing the quality is adequate - The critical tasks in pathology are not usually discriminating between benign and malignant tissues; that is usually an easy task for pathologists. Showing the power of the GAN in generating a latent space for borderline scenarios (e.g., atypia) would be a better approach. Only by representing these borderline cases might the critical questions be answered by pathologists - The experiments done with pathologists are not adequate. The experimental design is not deterministic. Asking pathologists to find the fake image among 25 low-resolution images is not fair. Pathologists are used to looking at tissue under a microscope. - More people are needed for the experiments; using, for example, 5 pathologists and 5 non-pathologists as subjects would allow a clearer comparison. Although the experiments are not adequate, the method would be a powerful tool to understand disease mechanisms from digital pathology images. The method could be improved further by including borderline cases and more participants.""",3,1 midl20_90_3,"""In the paper, the authors proposed a GAN application that allows extracting features from images and showing/understanding differences between cancer and benign areas. The FID metric was used to evaluate the quality of the generated images. The presented method is interesting. GAN applications in digital pathology have become popular recently. The paper presented an interesting idea for a GAN application. The method is well described and easy to follow. The authors present a pathologist evaluation, and they also used the FID metric to evaluate the quality of the generated patches. The paper presented an interesting idea for a GAN application; however, it is not clear how it can be used in the future. The abstract of the paper is confusing. The authors used data extracted from ~600 TMA. However, in the abstract, they report the number of patches, which is not meaningful and can be confusing. The number of patches used for training can easily be increased by additional augmentation and is not informative for readers. Instead, the authors should focus on the amount of original data (WSIs or TMAs). The presented work is interesting and novel. The main advantage of the paper is the twofold evaluation: (a) by the FID metric and (b) by pathologists. The method is well described and easy to follow; however, it is not clear how it can be used in the future.""",3,1 midl20_90_4,"""The authors presented a GAN-based pathology image generation method that combines ideas from BigGAN, StyleGAN, and the Relativistic Average Discriminator. They aim to disentangle style and content via StyleGAN and increase the generated image quality via RAD. To illustrate the effect of individual model choices, they conducted rigorous testing and present visually pleasing results. They also illustrate latent-space arithmetic examples that look impressive.
The paper is overall very well written and the ideas introduced in the paper are well explained. The authors explained their specific algorithmic decisions both by referring to the related literature and by justifying them via experiments. In this sense, I think the study is very rigorous. The visual results look very good. The illustration of latent space arithmetic is very good. The application is important in medical imaging, where there is a scarcity of data. But more importantly, in my opinion, defining such latent arithmetic is important in understanding the behavior of the models and making them interpretable. Mixing regularization is not explained in detail. The paragraph after equations (2) and (3) is not as clear as the rest of the text. The authors did not clarify why they picked layers 1 and 6 as the points of input entry to the generator network. The claim ""style mixing regularization encourages the generator to localize the high-level features of images in the latent space"" is not explained well or supported by experiments or proper citations. It is not clear why the authors used 2 different latent representations (z_1 and z_2) rather than one as in the original StyleGAN paper. Last but not least, Figure 2 does not illustrate the entry of 2 inputs to the generator network, so it is a bit confusing. The diagnostic impact should also be measured. Did the authors conduct a study where they generated benign and malignant patches and asked pathologists for their diagnosis? I understand that pathologists, most of the time, do not make a decision over a limited field-of-view patch; therefore, this experiment can be biased or hard to conduct. The authors could design a test setup using latent space arithmetic (like Figure 6) and ask for a pathologist reading. It would also be interesting to see where the pathologist changes their decision from benign to malignant in the series of images in Figure 5. The authors presented a thorough study with rigorous testing and visually pleasing results. They justified their claims well via experiments. The application is important in medical imaging, where there is a scarcity of data. But more importantly, in my opinion, defining such latent arithmetic is important in understanding the behavior of the models and making them interpretable and predictable. """,4,1 midl20_91_1,"""The authors propose a neural network approach that aims to generate signed distance functions. They use a U-net type encoder-decoder to expand and then collapse the feature layers as the network architecture. They use a three-dimensional cochlea shape model represented by 4 parameters (representing the longitudinal and the radial extents of the centerline). Their approach leads to a roughly 60-fold reduction in computation time compared to conventional signed distance function generation methods. This paper makes a good but very restricted contribution to the generation of signed distance functions. Since the paper used a cochlear dataset that essentially was described by 4 parameters, is the neural network exclusively useful only on such types of data? The generalizability to other signed distance functions is in question. The authors should provide more evaluation results on the 100 test datasets. For example, the Dice coefficient or the Jaccard index between the signed distance maps would be useful. The method yields good results on 9 clinical datasets (previously unseen by the model).
In summary, the paper makes a good contribution by learning signed distance function parameters for the cochlear dataset, but its applicability to other datasets has not been demonstrated. I would add that the mapping from a parametric mesh to a signed distance function is more of a topological map than a geometric map. Thus it would be useful to discuss the input data (only based on 4 parameters) and the features that are implicitly getting learnt from the topological representation. """,2,0 midl20_91_2,"""This paper suggests replacing the computationally expensive step of computing distance maps with a neural network. The network is trained to generate a 3D distance map from 4 parameters of a shape model (therefore bypassing the mesh representation). Neural networks are here mainly considered as fast approximators rather than actual predictors, which is a rather original and interesting approach. My main remarks: - The comparison to a naive VTK algorithm is a bit unfair. The computation time of the mesh-based SDM seems particularly high. There are also faster methods that compute an approximation of the SDM. - Based on the description of the training/validation splitting, there is no guarantee that the validation parameters are not included in the training phase. I am aware that it does not matter that much for the final application (which is just to reproduce a given shape model, so there is no real risk of overfitting), but it still biases the evaluation. - There seem to be some border artifacts visible in Figure 2 (Left-I), which are probably due to some padding operations in internal computations of the U-Net. Usually this is not a major problem since segmented objects are rarely touching the boundary, but here it seems that it does change the topology of the yellow curve, so it might be worth fixing. Should the authors decide to make it a longer paper, here are a couple of ideas that could be considered: - Here only 4 parameters are considered as input to the network. For many applications, this is not enough to get an accurate shape model. It would be interesting to see how well the network is able to approximate the shape model as the number of input parameters increases. - One of the core properties of distance maps is that their gradient is always one. The deviation of the network output's gradient from 1 would be another metric worth reporting. - Taking this idea one step further, this could even be implemented as part of the training loss, so that this property is enforced onto the network (a small sketch of such a term is given below). Minor remarks: - The width of the first layer 245760 seems a bit arbitrary - can you quickly mention how this was chosen? Also it is a bit weird because it is not resizable to 64 x 64 x 64. - Please specify the time unit in Table 1. - Typo L40: this prior work - Typo L41: resources - Typo L95: recover - Typo L101: a deep learning [...]""",3,0 midl20_91_3,"""This paper proposes a new U-net-like architecture for fast generation of signed distance functions (SDF) from parametric surfaces. A mapping is learned between surface parameters and the corresponding SDF on synthetic data, and is compared to a VTK class performing conventional SDF calculation using Eikonal solving. The paper is well written and fluent and the methodology is very sound. The authors show a good correspondence between the two methods, with a much lower computational time when inferring SDFs using their approach.
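To illustrate the |∇SDF| = 1 property suggested above, either as a reported metric or as an extra training loss, here is a minimal finite-difference sketch (the voxel spacing, weighting, and function name are assumptions made for illustration, not taken from the paper):

```python
# Sketch of an eikonal-style penalty: the gradient norm of a true signed
# distance map is 1, so (|grad| - 1)^2 measures the deviation.
import torch

def eikonal_penalty(sdf, spacing=1.0):
    """sdf: (B, 1, D, H, W) predicted signed distance map."""
    gz = (sdf[:, :, 1:] - sdf[:, :, :-1]) / spacing
    gy = (sdf[:, :, :, 1:] - sdf[:, :, :, :-1]) / spacing
    gx = (sdf[:, :, :, :, 1:] - sdf[:, :, :, :, :-1]) / spacing
    grad_norm = torch.sqrt(gz[:, :, :, :-1, :-1] ** 2 +
                           gy[:, :, :-1, :, :-1] ** 2 +
                           gx[:, :, :-1, :-1, :] ** 2 + 1e-8)
    return ((grad_norm - 1.0) ** 2).mean()   # 0 when the output is a true distance map
```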
The authors show almost perfect correspondence between conventional SDF calculation and their approach on a 9-surface clinical cochlear dataset. One could ask how the network learned so well with synthetic training examples. The choice of sampling uniformly over the parameter space seems very suboptimal. In general, it is likely that some parameters have more influence than others on the output shape. I understand this is a short paper and that this could not be discussed.""",3,0 midl20_91_4,"""- Quality: Study well designed overall. - Clarity: Paper clearly written. Fig 1 can be misleading when showing layers stacked vertically. - Originality: -- This paper does not deal with medical imaging, but only with shape encoding. -- Two recent works [1,2], not cited by this paper, have proposed a CNN-based solution for computing Signed Distance Functions, one even learning how to represent different classes of shapes [1]. Ref: [1] Park JJ, Florence P, Straub J, Newcombe R, Lovegrove S. DeepSDF: Learning continuous signed distance functions for shape representation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2019 (pp. 165-174). [2] Chen Z, Zhang H. Learning implicit fields for generative shape modeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2019 (pp. 5939-5948). - Significance of this work: No clear gain demonstrated versus existing methods to compute SDFs. This is more a ""feasibility"" study. I do not fully agree that the complexity of SDF computation with classic methods (e.g. fast marching) is ""a bottleneck"". Also the cochlear shapes are simple and smooth, modeled as a ""generalized cylinder around a centerline having four shape parameters"". This greatly limits significance. Finally, an SDM of size 50 x 50 x 60 can seem quite small to encode more complex shapes. - Pros and cons: Pros: - Clear, well written. Cons: - This paper does not deal with medical imaging, but only with shape encoding. - The shapes being studied are smooth and simple. """,1,0 midl20_92_1,"""This paper works on left-ventricle segmentation [1], using an FCN and a roundness prior. To not only enforce the roundness prior, but to integrate it into the training of the FCN, the authors train an auxiliary network that approximates the output of a Dynamic Programming (DP) algorithm that solves the roundness prior. The gradient of this auxiliary network can then be propagated to the original network performing the segmentation, allowing it to ""integrate"" the prior. One drawback of the proposed method is the requirement to know the center of the object at inference time. The results are evaluated on two public datasets (LVQuan2018 and ACDC) and show some improvement when there are very few annotations. [1] And, for that matter, could be applied to any segmentation problem with pretty round objects. - The prior is quite interesting, and very well explained - The evaluation reports several metrics, on several public datasets - The paper is overall well written and easy to follow - The ablation study on the number of training slices is interesting and very welcome - The results with U-Net are actually on par starting at 50/100 annotated slices (so, around 5-6 patients for ACDC, which is very few). Given the extra computation time introduced by the method (60% penalty at training), it quickly becomes worse than U-Net (efficiency wise).
- This is even worse if you take into account the need for the object center at inference (more on that later) - The auxiliary network is still required at inference time, which adds some complexity. It also requires some manual annotations to be present. - Having the auxiliary network at inference time, to me, doesn't make the whole thing ""end-to-end"". This is debatable, but what I would call end-to-end would be if we used only the trained U-Net at inference. And I think it would work; my guess is that the auxiliary network adds little value once the network is trained. I overall like the idea, but the results are not convincing. While some issues are easy to fix (simply don't use the auxiliary network at inference), the fact that the performance is on par with a U-Net starting at 50/100 annotated slices (_slices_, not patients) is a no-go. To me, it makes the extra cost of the method not worth it. But I do not think the method should be discarded. The authors show quite convincingly that it is possible to approximate and integrate a DP algorithm during the training of an FCN. I think that they didn't choose the best possible datasets to shine; LV segmentation is quite simple to begin with (RV, on the other hand...). I invite the authors to evaluate their method on harder tasks (perhaps prostate segmentation, PROMISE12 is a good dataset for that), where there is more room for improvement. This would justify the extra computational burden better.""",2,1 midl20_92_2,"""- In this paper, the authors proposed to tackle the problem of cardiac segmentation. - The basic idea is to incorporate prior knowledge into the deep networks, such that the model can still perform reasonably well even under a low-data regime. - Specifically, the authors propose to replace the non-differentiable modules (dynamic programming) with a differentiable general function approximator, e.g. a small deep network, and treat the gradient from this approximator as the gradient for backpropagation, updating the backbone networks. - Experiments have shown the effectiveness of this idea on the ACDC (Bernard et al., 2018) dataset and the LVQuan2018 dataset. - The paper is well-written, and the problem is well-motivated. - In medical image analysis, I definitely agree that we should not throw away years of research on traditional methods that are mathematically solid. - The experiments have demonstrated the effectiveness of the proposed methods under a low-data regime. - Missing reference on MRI segmentation: Vigneault et al. ""Ω-Net (Omega-Net): Fully Automatic, Multi-View Cardiac MR Detection, Orientation, and Segmentation with Deep Neural Networks"", in Medical Image Analysis, 2018 - Although I like the idea proposed in this paper, it is in fact not a surprise to me that people would come up with this idea; check the following paper: Engilberge et al. ""SoDeep: a Sorting Deep net to learn ranking loss surrogates"", in CVPR 2019. In this paper, the authors used a differentiable module (RNN) to approximate the ranking function, which is non-differentiable and non-decomposable, and used its gradient to further update the backbone networks, so that the entire network can be trained to optimise tasks which require ranking, e.g. Average Precision. - Missing experimental comparison on differentiable dynamic programming: Marco Cuturi,
Mathieu Blondel, ""Soft-DTW: a Differentiable Loss Function for Time-Series"", in ICML2017 Arthur Mensch, Mathieu Blondel, ""Differentiable Dynamic Programming for Structured Prediction and Attention"", in ICML2018. - I like the idea of using synthetic gradients, it's simple, and I can see it has the potential to be working well. - I think the authors should compare with the methods which tries to use the soft-argmin, which to me, is surely a potential solution if tuned carefully, despite the gradient will eventually collapse, but gradual temperature annealing should still guarantee the model to be trainable.""",3,1 midl20_92_3,"""In this paper, the authors tackle the problem of non-differentiability of functions as they propagate across a neural network. Here, they propose an approximate neural network model that generates synthetic gradients for backpropagation across a non-differentiable module. They apply this idea to the segmentation of left ventricles from short axis MRI. The idea for approximating derivaties in a neural network by sub-gradients of weakly differentiable functions is clever. The paper evaluates the performance of EDPCNN versus U-Net on a modified ACDC datatset and the LVQuan2018 dataset, both publicly available. The authors results show superior results from the combination of convolutional neural networks and dynamic programming achieve a significantly better segmentation accuracy than a CNN based approach. Another strength of the paper is that the idea of using dynamic programming provides superior performance for small datasets. The idea of the generation of the warped map is not described adequately in the paper. The interpolation operator is not described in detail. This is an issue of terminology. From the acronym of the method, it seems that the convolution neural network and dynamic programming are tightly integrated. However that is not the case. The paper makes a good contribution to the field. Because they combine the step of DP at the end with a CNN, they see a boost in performance for small datasets. However it is interesting that the neural network based methods catch up quickly as the dataset grows bigger. """,4,1 midl20_92_4,"""In this paper, the authors propose a method to segment the left ventricle on MR images using the ""star pattern"" method. As this method is not differentiable, the authors propose to replace it with a differentiable approximating function, so that the whole model can be trained using a gradient-based approach. The training is then made end-to-end by formulating the problem as a bilevel optimization problem where the approximating function is learned in the inner loop and the task loss is optimized in the outer loop. The experimental results show that this approach yields better results than training a simple U-Net. The paper is well written , easy to read and well structured. The problematic is clear and the proposed solution is well presented and justified. Replacing a non differentiable function by a differentiable approximation to allow a gradient-based training is interesting. One major problem I see is the novelty of the proposed method. Estimating a black-box function (in this case a non-differentiable function) by a neural network and learning it online at the same time as the main task was already proposed, for example in: Jacovi et al. Neural network gradient-based learning of black-box function interfaces. ICLR, 2019 Could you please clarify your contributions? 
One issue of learning this estimator function in a bilevel optimization setup is the overhead added to the training of the main task. We see for example in Table 1 that the overall training time is increased by 60% by adding the learning of the approximation function. Why not use a simple estimator like the straight-through estimator, as in: A. van den Oord, O. Vinyals, et al. Neural discrete representation learning. NeurIPS, 2017, to handle the argmin and learn discrete variables? This could avoid the need for an extra loop to learn the approximation function and make the training faster. The postprocessing described in Appendix D should appear more clearly in the end-to-end pipeline, as it contributes to the complexity added by the proposed method. In the experimental part, could you please clarify the U-Net + DP case, and more particularly the output of the U-Net? From my understanding, a standard stand-alone U-Net would take an image as input and output a segmentation mask. So the goal of adding a DP module to a pretrained U-Net is not clear. Is it to refine the predicted segmentation? Due to this uncertainty, I have some doubts about the results in the low-data regime and think that the proposed method is performing better than a standard U-Net because of the prior knowledge provided when giving the center of the star to the model. The motivation and the proposed solution are clear and well justified. However, there are some uncertainties about the novelty and the experimental part, which prevents me from giving an acceptance at the moment. This can change according to the answers given in the rebuttal. """,2,1 midl20_93_1,"""The paper proposes embedding ferns as an alternative to convolutional layers for deep learning architectures. It departs from standard random ferns and variants in that it is a drop-in replacement and allows for end-to-end training of architectures. The abstract is well structured and relatively easy to follow, and overall it can be interesting for MIDL. What is gained from moving from a convolutional layer to a fern? Ultimately it looks like the computational complexity of the proposed layer, and its memory footprint, are going to be similar to those of a standard convolutional layer. There are #ferns x 2^{depth} trainable parameters (plus the fixed ones), where a convolutional layer would have at worst c_{in} x k^2. It is not clear whether there are settings of the depth and number of ferns such that the number of parameters, computational complexity and/or memory usage will be reduced by an order of magnitude at little cost in performance, since even a depth of pseudo-formula leads to a similar number of parameters as a 3x3 kernel. In terms of operations, floating point multiplications from convolutions are replaced by an operation which expands one way or the other to a similar floating point multiplication or squaring, plus the pseudo-formula. What is the energy consumption gain attributable to? Also, why not report other metrics that relate to computational/memory complexity?""",3,0 midl20_93_2,"""Summary: Authors propose to replace the matrix-multiplication part of a convolutional layer with a differentiable random fern (as defined in Özuysal et al., IEEE TPAMI 2009). It is shown that this method halves the number of parameters with respect to a standard CNN and preserves almost the same performance in terms of classification accuracy. Remarks: 1- The paper is quite difficult to read and understand.
Many concepts such as ferns, IM2COL, UFM and the EmbeddingBag layer are not sufficiently explained in the paper. 2- A fern, as introduced by Özuysal and colleagues, is a small set of binary tests that is used with a Semi-Naive Bayesian approach in a classification problem. It's not clear how exactly ferns can replace matrix multiplication. The authors should better explain this point. 3- Many choices are not well motivated or explained, such as: the use of tanh, the offset s^k, the definition of w_u^k 4- Results seem interesting and it's a pity that the paper is not so clear. The authors probably need more space to better explain their algorithm and all related concepts. """,2,0 midl20_93_3,"""The main contribution of the paper is a novel approach of taking advantage of ferns and achieving (almost) the same performance as a vanilla net on the TUPAC challenge with only half of the network size and 1/60th of the energy consumption. The removal of floating point multiplications is attributed to using a Look-Up Table that holds the trainable weights, whose indices come from binary string comparison between feature vectors, hence introducing non-linearity to the system. Overall, the paper is concise and benefits greatly from the fact that the authors implemented their novel approach on the TUPAC challenge, which is a publicly known/studied dataset contest. The authors' results are tested on a public challenge, and their findings on the public challenge validate their methodological improvements. The authors reduce the net size substantially by removing multiplications with a Look-Up Table and improve the accuracy by learning the feature embeddings instead of using a histogram. Their approach allows their implementation to achieve (almost) the same accuracy as the vanilla net and outperform the XNOR net while having a substantially smaller network/parameter size. Some implementation details were given without explanation. This could be attributed to the fact that the paper was submitted to a short paper track and only had a 3-page limit. Nevertheless, the authors don't explain why they chose tanh for the threshold subtraction. They also don't explain the reasoning behind choosing 3 for the depth of the ferns they use at every layer. Spatial convolutions are integrated thanks to the IM2COL operator. It might be valuable for the authors to explain further the benefits they got from that operator, to help the reader's understanding. The LUT size is #ferns * m. This LUT becomes a hash table where the lookup is in constant time, but the storage is still needed. The paper was unclear about whether they included the size of the look-up table in their parameter calculation in Table 1, since building that LUT will still require space. One still might say that choosing 24 ferns at every layer with a depth of 3 will not consume substantial space for the generation of the lookup table, but a greater number of layers and ferns might make this approach converge to the vanilla net in terms of size if one wants to scale it up. Also, is there a particular reason this method is used in, or benefits, medical machine learning? """,3,0 midl20_93_4,"""This paper relies on Fig. 1 to convey most parts of the important ideas. Unfortunately, the figure doesn't show all the key components clearly. The main innovation seems to be replacing a multiplication operation with a lookup table + trainable weighted sums.
Since convolution can be implemented as im2col followed by a matrix multiplication, I feel it's more appropriate to claim it's a fast convolution by caching some binary encoded results. It's not clear how much memory it takes and how the energy consumption is calculated in terms of both memory access and arithmetic operations. The paper also claims ""without using floating-point multiplications"", but there are floating-point weighted sums as shown in Fig. 1.""",1,0 midl20_94_1,"""This paper addresses the problem of parallel imaging reconstruction with unknown coil sensitivity profiles. The novel component of the approach is to represent the coil sensitivities in a spherical function basis. There are multiple concerns with this paper: 1. The paper never defines what a spherical function basis is, and there are no references about it. 2. There are many methods that can be used to solve the parallel imaging problem besides [11,9]. References [11,9] are both more than 10 years old at this point, and the paper doesn't cite any of the alternative and more recent methods like JSENSE, SPIRiT, ESPIRiT, P-LORAKS, ENLIVE, etc. 3. The paper spends a lot of time describing ADMM in section 2, but ADMM is a standard algorithm and there are no particularly creative ideas in this section. It would be better to remove this section to leave space to explain the novel contribution more clearly. 4. The results are based on a very artificial simulation and there are no meaningful comparisons against other methods. There is no excuse for this; it is easy to find real data these days and code is available for state-of-the-art methods. 5. This is a bilinear optimization that inherently does not have a unique solution. It seems meaningless to compute PSNR and SSIM without accounting for this fundamental ambiguity in some way. There is no discussion of this, so it appears that the numerical evaluation is fundamentally flawed.""",1,0 midl20_94_2,"""This paper proposed an optimization framework for parallel MRI assuming that the coil sensitivities are represented by spherical basis functions. The method is formulated as a sparsity-regularized energy function and solved by a linearized ADMM algorithm. Overall, the idea is reasonable and the paper is overall clearly written. I have the following concerns. (1) This is a conference on medical imaging with deep learning. However, this paper is based on a traditional energy model and is not a DL-based method. As far as we know, in recent MRI research, ADMM has been merged into the DL community via deep unfolding methods. (2) If we do not care about whether this is a DL-based paper, I have the following questions on this research. First, multi-coil parallel imaging has previously been formulated as an energy minimization problem, so the major novelty compared with previous work should be clarified. Second, the results are reported on a phantom, and not compared with previous work on real images. (3) Due to the space limit as a short paper, the merits of this proposed method are not clearly shown. I suggest further investigation along this research direction, e.g., including more convincing novelties, more comparisons, experiments on real images, etc.""",2,0 midl20_94_3,"""The coil sensitivity functions are modeled by linear combinations of spherical basis functions. The model coefficients were estimated using an ADMM algorithm. But the underlying motivation is not justified. The notation pseudo-formula was not explained.
The provided images are too small and low-resolution to show any useful details. """,3,0 midl20_94_4,"""[A: Overall comments] The paper presents a sound method to improve the stability of parallel MRI reconstruction that takes the coil sensitivities into account. It formulates a reconstruction problem in which the signals are expanded in a particular basis (spherical harmonics? Not specified.) whose coefficients are regularized via an L1 penalty. I don't think regularization via L1 losses is particularly novel, but the integration into the optimization method ADMM perhaps is. Either way, the paper is perhaps a bit too concise: it is not self-contained, it relies a lot on references and prior knowledge of the reader, and not all symbols are defined. In addition, I have my doubts on whether the paper fits the scope of the conference: MIDL has a broad scope, including all areas of medical image analysis and computer-assisted intervention, where deep learning is a key element. The paper does not have a (deep) learning component. [B: Recommendation] Strong reject, mainly because it is out of scope of the conference. The paper could otherwise be significantly improved with additional clarification to make it more self-contained. [C: Some detailed comments] Second equation: how do c and a relate? They seem to be used interchangeably; c is a function expanded in the basis with a as a vector of coefficients? What is G after equation 2? It says it is given in Section 2, but this is not true. What is the preconditioned ADMM? Please give a reference, or explain. Too much prior knowledge is expected here. Weird '.' placement above (3a). After Eq (3): what do the vectors v and q represent? I don't understand where they come from and can't make sense of their dimensions. Details are missing. Reconstruction with (1) vs (2) is compared; however, (1) is the forward model (which provides the k-space, not the image) and (2) is a minimization problem (which returns a scalar, not an image). So I don't understand how to compare reconstruction via (1) vs (2). My guess is that it is meant to compare with and without regularization, but this is unclear and at the very least sloppy.""",1,0 midl20_95_1,"""Starting with a set of segmentation maps for Brain MRIs, a generative model is used to create a training dataset with simulated images of widely varying contrasts between tissue types. A segmentation CNN trained on this dataset performs well on real images acquired using different MRI protocols and from different datasets / imaging centers. In my opinion, this approach seems to essentially solve the cross-scanner / cross-protocol robustness problem in CNN-based Brain MRI segmentation. * The paper tackles an important problem - that of lack of robustness of CNN-based image segmentation methods to scanner / protocol changes - and proposes a simple but effective solution for the same. * The method can be seen as extensive data augmentation, except that it augments a dataset with no real MR images to begin with - all the training images are simulated. I think this is a neat solution to circumvent reliance on training images of a particular contrast. * By including images with and without skull stripping, as well as by modeling the bias fields, the method ensures that test images can be segmented without any pre-processing. Although removing bias fields with other tools is relatively easy, it is nevertheless nice to have a segmentation tool that can work directly with the acquired images.
* The paper is extremely well-written and a pleasure to read! There are no major weaknesses in the paper. Here are a few minor suggestions: * Some details regarding the training procedure and CNN architecture would be helpful: - Does it super long for training the CNN, given the wide ranges of different hyper-parameters and therefore, the relatively large size of the effective training dataset. Although not relevant for practical use, I think this information might be interesting for readers. - I suppose the architecture does not contain any batch normalization layers. It would be better to clarify this, as BN layers are typically present in segmentation CNNs. - Do the authors observe any training instability due to the high variance in simulated contrasts? * While extremely elegant, I think the method would be restricted to cases where the intensities within each segmentation label can be modeled as samples of a Gaussian distribution. For instance, suppose we are only interested in segmenting a structure in a particular region of the image and have labels for this, but the rest of image is just labelled as background although it might contain several other structures. In this situation, the proposed method to start with segmentation labels and generate training images would not work. For completeness, it would be nice if the authors can point this out in the discussion. I believe that the paper tackles an important problem in Brain MRI segmentation and proposes a novel, elegant and effective solution for the same. This will certainly be of great interest to the community.""",4,1 midl20_95_2,"""The paper describes a deep learning strategy to allow for segmentation of brain MR scans regardless of the actual contrast of the MR datasets. Synthetic, simulated data was used for initial training. Evaluation was performed on in-vivo data of more than 1000 subjects. Results show that the proposed approach performs better than previous methods like the classical Bayesian segmentation. - well structured paper, easy to read. - paper targets a very relevant topic. Current work on MR image segmentation often works on one contrast the algorithm was trained on, but fails on new/other contrasts. Therefore, contrast-agnostic segmentation is a very important topic. - the use of in-vivo data. The work evaluates on real world data and not only on simulated data. - limited range of contrasts. The contrasts used (T1, T2, PD) are relatively close in appearance in healthy brains (in some aspects T1 and T2 show inverted intensities). It would have been clinical more interesting to include other contrast like DIR or diffusion-weighted imaging) - It would be helpful to provide a table with contrast-relevant scan parameters for all data sources. - no information is provided on the actual spatial resolution of the datasets. Are they identical? Has the data be reformatted to yield the same nominal resolution. This is important to be able to compare the results! - no groundtruth (manual segmentation) available for by far the largest part of the in-vivo data. This is relevant work and the results are encouraging. There is some missing data and information. In addition, the impact of the described work should be improved by providing more detailed and objective analyses. """,3,1 midl20_95_3,"""In this paper ""A learning strategy for contrast-agnostic MRI segmentation"", the authors tackle the problem of image segmentation on MRI data of varying contrast. 
They show that using a generative model (deformation, bias field, intensity model with randomized parameters) leads to a training approach that creates networks that are not sensitive to the effects over which the network is trained. The method is well motivated both from a practical perspective and from a theoretical one. The technical description is well considered and clearly presented. The literature is summarized well. On the target metrics, the method is highly effective. The biggest weakness of this approach is that it is based off a core network that compares against baselines that are under trained relative to current state of the art. For example, a careful comparison of modality specific T1 tools would be very helpful. Considerations of variance / statistical tests were not performed even though the data were well visualized for variability. This is an interesting well written paper. The method is articulated clearly in terms of both motivation and technical details. The details are sufficient such that others could reproduce this work. Two areas of improvement would be to ensure that a robust modern baseline is included and include a statistical assessment. """,3,1 midl20_95_4,"""The authors propose a training scheme to make CNNs for segmentation agnostic to MR contrasts unseen during training. A generative model creates synthetic MR images conditioned on label maps. These generated images together with the label maps are then used to train a segmentation network. This network is able to segment MR images of contrast never seen. The approach could be of high significance, for example, when training examples are scarce, when robustness to multi-centre data is required, or when new MR contrasts are introduced. - The paper is well structured and written, making it understandable and easy to follow - Strong motivation and introduction of existing literature - Well formulated methods section with algorithm and code for reproducibility - Promising results - Extensive evaluation: Two baselines and an ablation of the proposed AnyNET serve as comparison - Four different datasets were considered with label maps consisting of multiple ROIs There are two very strong claims, which need to be weakened in my opinion: - for the first time: how about to the best of our knowledge for the first time - not biased towards any: any is a very strong statement, and hard to prove (if not impossible). How about removing the any? Same in the last paragraph of the introduction in italic. I wonder what the influence of the number of labels in the label map pseudo-formula is. This is not discussed in the paper. Also, could it be that the algorithm is heavily location-dependent by simply learning the position of the different brain structures (i.e., learning an atlas) instead of being contrast-agnostic? Did the author, for instance, try to translate the testing images (beyond the pseudo-formula 20 px used during the image generation). Another experiment supporting your claims would be to use e.g. brain tumor images from the BraTS dataset (e.g., T1c image). If it would be truly contrast-agnostic, the contrast differences introduced by the tumor should not affect the segmentation very strongly (very strong is, of course, subjective by visual inspection since ground truth is not available and probably difficult to obtain with existing methods for the used ROIs). I think the paper makes a great contribution to MIDL 2020 and is of high relevance for the segmentation community. 
Few adaptions (see questions to address in the rebuttal) might convince me to increase the rating to strong accept.""",3,1 midl20_96_1,"""An evaluation of several methods for detecting medical images that are 'out of distribution' is presented on multiple medical image datasets. This is an important issue for deployable diagnostic medical image analysis systems. Experiments include three different categories of out-of-distribution data, and three different categories of detection method. A main strength of the paper is the range of methods and datasets used, and the use of different categories (use cases) of out-of-distribution data. The conclusions clearly point to a need for better methods for detecting images from parts of the domain distribution that are not represented in the training data due to selection bias (e.g. rare diseases). The use of AUPRC (area under precision-recall curve) and its interpretation do not always seem justified: the phrase ""accuracy that's much lower than AUPRC"" seems to imply they are directly comparable; the tasks will only be concerned with a part of the PR curve not all of it, so AUPRC is perhaps not the best choice of metric here. The authors make a serious attempt to evaluate out-of-distribution methods across a range of medical image scenarios. The results provide useful empirical evidence to inform researchers about the likely relative performance of these methods under different circumstances, as well as pointing to the current limitations of such methods. """,3,1 midl20_96_2,"""The paper aims to report on a benchmark of Out-of-Distribution (OoD) detection methods. Even though this is done with an extensive list of datasets and methods, i find the writing quite poor and hard to follow. It reads like the authors got lost in the mix. At least, that is what happened to me. After all, I do find it dull and cannot recommend for acceptance, especially thinking that it does not tell us something new or interesting. After all, the following line makes me think that the current work is just an imitation of earlier work: ""Our findings are almost opposite that of Shafaei et al. (2018), who evaluate on natural images which is a different domain, despite using the same code for overlapping methods."" A good ultimate goal to help medical image analysis community. The benchmark involves many methods to compare. A taxonomy of OoD usecases is also given and the authors play with many datasets to design usecases and OoD detection tasks. In opinion, the writing of the paper is its biggest weakness. It is hard to follow, which also makes it much harder to understand the results from such a large scale benchmark. Even though the motivation is good, the study fails in clearly reporting its results and merits. I found it difficult to properly evaluate this paper, due to the aforementioned reasons. I could not recommend a paper for inclusion, since I could not properly understand and judge it. My only suggestion to the authors would be to substantially revise the paper so that they can communicate their work more effectively. """,1,1 midl20_96_3,"""The authors explore various methods of detecting out-of-distribution (OOD) samples using several different datasets with varying properties. They categorise types of OOD samples (type 1: data from a wrong domain, type 2: data from the right domain but e.g. poor quality, type 3: data from the right domain but with previously unencountered abnormality). 
They also categorise the various methods from the literature for detecting them (data-only, classifier-only, with auxiliary-classifier). They experiment with chest X-ray, fundus and pathology images and a large number of OOD detection methods. The idea to compare different methods of OOD detection is good and interesting for the community. The authors made a reasonable effort to include a cross-section of data and methods and to analyse the results appropriately. An interesting finding is that methods using auxiliary classifiers do not perform better. As per the subsequent section, the paper is not well structured and is difficult to follow in its current format. The text requires improvement in clarity, the figures are not well explained and the details of the actual methods implemented are entirely unclear. Aside from this, the results are of mild interest, but since many of the methods perform equally well it is difficult to draw very strong conclusions. The paper as a whole is difficult to follow and poorly structured. Important details of the implementations are absent and figures are not easily interpreted or well captioned. It is difficult to draw very strong conclusions or recommendations from the results.""",2,1 midl20_96_4,"""The paper benchmarks OoD detection methods in three separate and commonly used domains of medical imaging. To this end, the paper considers three distinct OoD task categories and three classes of OoD methods. An OoD system is essential in the deployment of medical imaging analysis tools in the real world, and this paper tries to establish such a benchmark, as a systematic evaluation of OoD detection methods for medical imaging applications remains absent. 1. This paper provides a systematic evaluation of OoD detection methods for medical imaging applications. OoD detection is crucial as current deep learning systems, owing to their over-confident nature, limit safe deployment in real-world medical settings. 2. The paper is evaluated on accessible medical imaging datasets (e.g., Chest X-ray), and the experiments are carefully laid out. One of the most significant weaknesses of the paper, in my opinion, is the selection of methods for OoD detection. It is widely known that discriminative networks are over-confident in their predictions, which is why there has been a lot of interest in learning deep generative model-based approaches for OoD detection. Although generative models like VAEs have been considered in this work, the lack of analysis of discriminative vs. generative models ignores the recent progress made in the research around OoD benchmarking. It seems the authors cited one such paper [1] but failed to discuss it appropriately in their work. [1] Do Deep Generative Models Know What They Don't Know? Nalisnick et al., ICLR, 2019. The paper is a useful contribution to the medical imaging community. But at the same time, it lacks discussion/experiments on some of the recent developments around OoD benchmarking in the general machine learning domain.""",3,1 midl20_97_1,"""Very small paper. Method not novel. Missing a lot of details to evaluate results. Single-Stage vs.
Multi-Stage Machine Learning Algorithms for Prostate Segmentation in Magnetic Resonance Images""",2,0 midl20_97_2,"""This paper presents a pipeline to perform automatic prostate segmentation in MRI. The main assumption is that segmentation performance would benefit from a separate processing of prostate MR images that contain seminal vesicles from those that do not. The proposed architecture consists of a first classification network that is trained to separate images with or without seminal vesicles. Each class of images is then processed through a separate UNet network. This architecture is compared to a standard UNet architecture trained on both types of images. This paper suffers from several flaws that should be addressed. Regarding the methodological part: -The architecture of the classification network should be provided. -A description of the MRI dataset is critically missing, including the MRI sequence parameters, scanner and acquisition parameters. A reference to the Artemis database should be provided. -Regarding the evaluation, from what I understand, the authors adopted a resubstitution method (i.e. train and test on the same dataset): all images were used for the training and testing of the network. The text should be clarified if I misunderstood. Else, evaluation should be performed in a cross-validation or hold-out scenario to avoid producing optimistically biased results. -Regarding the quantitative results, it is not clear if the reported accuracies and DSC for the multi-stage model were estimated from images passed through the segmentation UNet after the classification step or not. If yes, then this means that these values reflect (i.e. account for) the imperfect accuracy of the classification model (0.8828), which can thus erroneously direct images with vesicles into the no-vesicle UNet and vice versa. If not, then the reported performance only evaluates the segmentation performance for each type of image (with or without vesicles). In this latter situation, the authors should perform the whole evaluation accounting for the classification step and following a cross-validation strategy as suggested above. -The quantitative performance of the standard UNet model trained on both types of images is similar to that reported by the two-stage model, thus suggesting that the two-stage model may not be competitive, since it requires training three deep models instead of one. Please comment. """,1,0 midl20_97_3,"""Multi-stage training (with a detection network as the first stage) is not new, for example, those with Mask RCNN. Compared with a single-stage approach: with significantly higher costs, including additional computation resources (two stages) and manual efforts (labelling each slice w/o seminal vesicles), the overall performance gain seems to be marginal (0.9105, 0.9035 vs 0.9063 in Dice score).""",1,0 midl20_98_1,"""This work presents a 4D spatio-temporal deep learning approach for end-to-end motion forecasting and estimation using a stream of OCT volumes. The proposed method is validated on OCT 3D sequences. Compared to the alternative 3D CNN strategies, this work shows that the 4D CNN achieves better motion estimation and forecasting results. The motivation is clear. It is easy to follow the paper. Some implementation details are missing (e.g. parameter settings).
Overall, this short paper shows some promising results of using 4D CNN in motion analysis. """,3,0 midl20_98_2,"""The authors present and evaluate five different methods for estimation of a motion vector from a series of 3D OCT volumes. The application is interesting and the proposed network architectures are intuitive and seem appropriate for the problem at hand. The use of 4D convolutions has not been explored extensively, and it is nice to see an application for them. Some minor remarks: - Please include the resolution (size in voxels as well as mm) of the input volumes. - What do the 12 outputs of the network represent exactly? If I understand correctly, the ouputs are 3D motion vectors (3 numbers), times 3 time points s_{t4}, s_{t5}, s_{t6} Would this not make 9 outputs? - In Figure one, it seems that one of the outputs is s_{t_0} Should this not be s_{t_n}$? - Data has been generated using smooth curved trajectories. It would be interesting to know how these were generated, and whether this resembles real data. - It took me a while to relate the 83Hz in the text to the 12ms in the Table, maybe make this more explicit.""",4,0 midl20_98_3,"""The authors compare five different approaches for motion estimation and forecasting in a chicken breast sample OCT experiment. The main result is that taking into account the temporal nature of the data (eg. by 4D-convolution and especially using appropriate spatial preprocessing of the data) allows for improved motion estimation and prediction. From the methodical perspective, the authors highlight the applied *4D* deep learning approaches, which indeed are still not commonly applied in the medical imaging domain. At this, it should be noted that their application is also not novel per se. However, the proposed joined spatial preprocessing of the individual OCT frames (inspired by the underlying publication Gessert et al, 2019) is an interesting idea. In general, the manuscript is well structured and written. As ""Cons"" points: central aspects that are necessary to interpret the results are not given (at least not in the manuscript; some are listed in the underlying paper by Gessert et al, 2019): temporal resolution, image resolution, motion span and velocity range of the trajectory. Furthermore, which are the actual OCT applications that are addressed, what are typical motion patterns and ranges and what are corresponding application-driven requirements in terms of eg. MAE? In summary: The contribution offers a nice comparison study of 3D vs. 4D approaches for deep learning-based motion forecasting. Due to missing information, it is, however, hard to interpret the results besides the obvious gain in motion estimation and prediction accuracy.""",3,0 midl20_98_4,"""This paper evaluates 5 different models for motion tracking in 4D OCT. The models are variants of that proposed in Gessert et al (2019), which is here extended in different ways to perform motion forecasting/prediction using a sequence of OCT volumes, rather than motion estimation between 2 OCT volumes. On the positive side, the extension of the Gessert model to motion forecasting seems like a useful one. The methods employed seem reasonable and quantitative evaluation is performed to compare them. The discussion of the results reveals findings that may well be of interest to others. However, one weakness of the paper was that the details of the experimental setup for data generation were not clear without following up the Gessert et al (2019) reference. 
Was the setup the same as in Gessert et al (2019), i.e. with a robot moving the object and mirrors moving the OCT FOV? Please modify the paper to make this clear. Also, can the authors comment on what the accuracy requirement is for motion tracking in OCT? Other specific suggestions: Section 2: region of interest (ROI) performing motions does not make sense to me. Maybe get rid of performing motions? Section 2: In description of n-Path-CNN3D, extent should be extend Section 2, Dataset: For data generation, we consider various smooth curved trajectories with different motion magnitudes this is a bit vague, can you provide more information? How were these trajectories formed? How big were the ROIs? Section 3: combing should be combining """,3,0 midl20_99_1,"""The paper compares different state-of-the-art approaches for visual interpretability in 2D chest X-ray classification. The comparison was made based on their localization capabilities, robustness to model parameter, label randomization, and repeatability/reproducibility with model architectures. And the abnormality localization is evaluated with the Dice's score. 1. The paper is well-written and well-organized. 2. The submission relates to the application of deep learning in the field of chest X-ray classification, which is highly relevant to the MIDL audience. 3. The proposed method is technically sound. 4. Experimental results support the claim made in the paper. """,3,0 midl20_99_2,"""CNN interpretability methods are used more and more in medical image analysis. The authors present a thourough evaluation of several of these methods (localisation capabilities, robustness to model parameter and label randomisation, repeatability and reproducibility with model architectures) extending the work first proposed by Adebayo et al.. This work is very interesting and was most needed. """,4,0 midl20_99_3,"""This paper concerns the assessment of saliency map validity. It was shown that GradCAM is superior to other methods in terms of model and parameter randomization. This is a useful results, as the interpretability that saliency mapping enables is becoming more and more important to help visualize why deep networks are making their decisions. However, there was a lack of discussion of these results in this paper - are there any possible explanations for why GradCAM is performing better? Furthermore, the images in the figures hard to see. They should be larger and as much whitespace should be removed between images.""",3,0 midl20_100_1,"""- The authors propose an architecture for segmentation that combines several components that improve over U-net, Segnet and FractalNet baselines. Notable components include squeeze-excite (SE) blocks, residual units, and a 2.5D convolution model. - The paper is clearly organised, although some important details about the data and architecture are missing. - Unanswered questions regarding data: What size are the images? For inter-observer variability, what experience did the second annotator have? - Questions regarding the model: How many layers and filters are there in the model? How many spatial scales/downsampling layers are used? Do you use skip connections? What are the 'specialized convolutions', and how do they improve performance? - Results on the test data compared to the inter- and intra-observer variability are quite promising. 
However, no illustrations are shown for qualitative performance and comparison with baseline models, particularly in regions of scar where the authors state models typically struggle. """,2,0 midl20_100_2,"""This short paper describes a network architecture for segmentation of the left ventricle in short-axis contrast-enhanced MRI scans, aiming specifically at late gadolinium enhancement MRI scans. The authors propose to use three 2D networks with late fusion of the predictions (2.5D). The network architecture makes use of a ResNet50 as feature extractor and uses squeeze-and-excitation blocks to somehow combine the features extracted by the individual residual blocks of the ResNet into a single prediction. Unfortunately, the description of the network architecture is rather hard to follow, and there is also no illustration of the architecture - overall, it is difficult to grasp both the overall structure as well as the main novelty of the architecture. The propose network was trained and evaluated with a large dataset (~350 scans) and compared with various other architectures. Inter- and intra-observer variations were also measured by repeating the manual segmentation of the test dataset. The method achieves mediocre Dice scores (overall 82% on average), but the paper demonstrates that this performance is close to the inter- and intra-observer agreement. In summary, the details of the network architecture proposed in this paper are difficult to understand, but even though it is clear that this is still preliminary work that is being presented, the evaluation is quite detailed and the dataset relatively large. Provided that the authors improve the presentation of their method, this could be a good contribution to MIDL. """,3,0 midl20_100_3,"""The manuscript addresses a relevant task but presents some weaknesses. Some methodological choices are not clearly explained: for example, which are the benefits of including squeeze and excitation blocks for the addressed task? The survey of the state of the art can be improved, e.g., by citing and discussing more relevant literature. I strongly suggest the authors to ask a native English speaker to proofread the manuscript. The overall manuscript readability also can be improved. My specific comments can be found hereafter. Abstract - The authors should give more space to the description of the method, shortening the introduction if needed. - The sentence Cardiac left ventricular (LV) segmentation [...] is a necessary step in the processing allowing the identification [...] should be changed in Cardiac left ventricular (LV) segmentation [...] allows the identification and diagnosis of cardiac diseases. - How were the ""best trained proposed models"" chosen? This should be clearly stated. Introduction - The authors write that [...] deep learning-based models have been applied for LGE MRI segmentation in various disease areas. However, only one paper is cited. A more comprehensive literature review should be provided. For example, the authors could cite [Moccia, Sara, et al. ""Development and testing of a deep learning-based strategy for scar segmentation on CMR-LGE images."" Magnetic Resonance Materials in Physics, Biology and Medicine 32.2 (2019): 187-195.] and [Li, Lei, et al. ""Atrial scar quantification via multi-scale CNN in the graph-cuts framework."" Medical Image Analysis 60 (2020): 101595.] - The benefits of (i) including squeeze and excitation blocks and (ii) processing volumes in a 2.5D fashion should be introduced. 
-Why do the authors refer to 2.5D instead of 3D? Datasets - How was the manual segmentation performed? Did the clinicians make use of any annotation software? If an expert performed the manual annotation, how was the inter-subject variability computed? 2.5 D proposed RSE-Net Model - What do the authors mean by special convolutional module? - I suggest the authors to give more space to the SE module description, considering that introducing this block is the central part of the paper. - How were the three models chosen? This should be clearly explained in this section. - The authors should be more accurate in reporting the learning-rate and batch-size values, as well as the used loss function. Results and Discussion - Why were the DSC and HD computed in their 2D formulation? - How was the correlation value computed? - Table 1 should show also dispersion metrics. - To my knowledge. the scar tissue is rather well contrasted with respect to the myocardial region in CMR-LGE volumes (thats why CMR-LGE is used over standard CMR). A figure to show sample challenging slices, with the obtained segmentation, may help the reader in appreciating more the challenges that have to be tackled. - Please, change ""myocardial architecture"" with ""myocardial anatomy"". - As a general comment, it would be nice to reference publicly available datasets in the field (if any). Testing the proposed methodology on publicly available datasets would promote a fair comparison with the literature. Minor - All acronyms should be defined at their first use (e.g., SE) """,2,0 midl20_100_4,"""The paper proposed a new 2.5 D residual neural network for myocardial segmentation from LGE MRI. The results showed their method achieved similar accuracy to the intra-observer variation, and better than the inter-observer variation. *Strengths None. *Weaknesses 1. Regarding to the methodology in this work, what and how the network is composed of and trained. A graph illustrating the architecture is very useful. 2. In the Sec 2.2, the paper reads, SE module and special convolutional module were used in the network. Why this should be used? What is the rationale or advantage of using it? 3. In the Introduction, only one paper about segmentation of LGE MR images is referred. The authors should be aware that recently there were a LGE MRI segmentation challenge (MS-CMRseg) and cardiac MRI segmentation challenge (ACDC) from MICCAI, and ample literature is available for comparisons. 4. Results showed they achieved Housdorff Distance (HD) of about 3-4 mm in apex and basal slices, which is very contradictory to existing literature. With better Dice scores, MS-CMRseg challenge reported the best myocardium segmentation from over 20 submissions had over 10 mm HD on LGE MRI, and ACDC reported best HD of about 5-10 mm on BSSFP MRI. """,2,0 midl20_101_1,"""The idea is valid. And random projections are one method for stratified bootstrap sampling of datasets. Technically correct bu this paper falls significantly below the threshold of this conference. It is a passable workshop paper. The significance is rather weak. The evaluations reported in the table are the same as in figure. Putting both is redundant. The presentation can be improved a lot. There are meager significant differences of proposal with existing baseline ensembles. """,2,0 midl20_101_2,"""This short paper proposes a two-level ensembling strategy, and evaluates its performance on a dataset of 137 patients with head&neck cancer, for predicting survival (yes/no). 
As input features, radiomics features and clinical features are combined. Strengths: - The proposed ensembling strategy is compared with several other ensembling methods and with individual classifiers. - The results indicate competive performance of the proposed method. Weaknesses: - Unfortunately, the description of the method is too short and vague to assess its theoretical validity, novelty, and relation to existing methods. It is also unclear whether any hyperparameters are involved in this new method, and how they were tuned. The results are encouraging, but the lack of clarity about the method is a major concern. Details: - There seems to be a corrupt equation at the end of Section II. - Fig 2: what is ""AML""?""",1,0 midl20_101_3,"""Authors proposed a hierarchical fusion framework to integrate random classifiers. It is an interesting work and could have clinical impacts. Experiments show a large improvements from each individual classifier. Major concerns: - Methods section is too short for readers to really understand HFRPC. It is necessary to summarize the two-level using equations. How to choose weights for each base classifier ? Also, Eq 1 is not clear due to the format issue. - Experiment section lacks lot of details. what value you are going to predict in survival prediction ? Did you predict if the patient can survival more than 2 years ? Also, how many radiomics features are extracted and did you use feature selection ? In survival prediction, it usually reports C-index or AUC if the target survival year is specified. - Technical novelty is limited.""",2,0 midl20_101_4,"""- Introduction is a summary of ensemble learning, rather than what the problem is with existing methods that needs to be solved. This means the solution is not well motivated or clear as to the reasoning. - Weird formatting issues in the paper, unclear notation. - Not clear what the different ensemble methods are - Clinical problem and dataset not described, not clear how classes were defined.""",1,0 midl20_102_1,"""The methodology introduced within the paper is clear, simple and concise. The authors propose combining N=4 sine wave frequencies to apply a continuous transformation function that modifies the gray values of the training dataset. The transformation preserves edge gray-value information and is effective in assisting vertebral body segmentation. Cons: The authors fail to acknowledge or mention whether this transformation is applicable to other modalities (for example: T1-weighted MRI input data, or CT to MRI detection) and other less-bony structures. Potential impact: Preliminary results demonstrate that the transformation enhances and preserves, the vertebral body edges that leads to good cross-modality segmentation. I think the introduced transformation has potential use and impact for generalizing datasets during training where structures being segmented have edges to guide localization/segementation. It would be interesting to see whether the introduced transformation can assist with cross-modality whole-vertebra and intervertebral disc segmentation, where there is low bone/tissue image intensity contrast. """,2,0 midl20_102_2,"""The authors propose a method to train gray-value-invariant networks by applying random gray value transformations to the images during training. The paper proposes an intensity transformation using sine functions with random parameters. The method is evaluated on an MR-to-CT lumbar segmentation problem, showing that it improved cross-modality learning. 
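To make the augmentation concrete, one plausible form of such a smooth, random gray-value transformation is sketched below (purely my own reconstruction for illustration; the paper's exact parameterisation, parameter ranges and normalisation may differ). Intensities are assumed to be normalised to [0, 1].

```python
import numpy as np

def random_gray_value_transform(image, n_waves=4, rng=None):
    """Apply a random, smooth gray-value remapping built from a sum of sine waves."""
    rng = rng or np.random.default_rng()
    amplitude = rng.uniform(0.0, 0.25, size=n_waves)       # random amplitudes (assumed range)
    frequency = rng.uniform(0.5, 4.0, size=n_waves)        # random frequencies (assumed range)
    phase = rng.uniform(0.0, 2 * np.pi, size=n_waves)
    out = image.astype(float).copy()
    for a, f, p in zip(amplitude, frequency, phase):
        out = out + a * np.sin(2 * np.pi * f * image + p)  # smooth remap of the gray values
    return np.clip(out, 0.0, 1.0)

# drawn freshly for every training image, so the network never sees a fixed contrast
augmented = random_gray_value_transform(np.random.rand(64, 64))
```

Because the remapping acts on the normalised intensity values rather than on spatial coordinates, edges stay in place and only the contrast between tissues changes, which is presumably why edge information is preserved.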
Strengths: The proposed intensity transformation is simple and easy to apply. Based on the (limited) experiments in the paper, the method does seem to improve the cross-domain classification in this case. It is also useful to see that the augmentation did not hurt the performance in the original MRI domain. Weaknesses: There is no comparison with alternative methods. The paper's assertion that having smooth gray-value transformations is better than having non-smooth transformations is difficult to evaluate, because there are no results for other ways to train gray-value-invariant methods. The experiments are limited in other ways as well: the dataset is small and there is no information about the network or how this was trained or evaluated. (And there is still one half page left where this could have been explained.) The results on page 3 would have looked better in a table.""",2,0 midl20_102_3,"""It is refreshing to read a paper claiming that all that you need for cross-modality learning is good data augmentation. However, modality transfer should be approached cautiously since different image modalities of the same anatomical sites are used to capture complementary information. Thus, modality transfer should only be used when aiming for information available in both modalities. In the case of vertebrae, cortical bone clearly visible as the brightest structure in the CT images can be easily mixed with cartilage surrounding vertebrae that can be seen in MR images as dark structure surrounding the vertebral body. This is probably the reason why there is a clear under-sampling segmentation in CT (check the Fig. 2). Moreover, the proposed augmentation looks too simplified to be used in any other scenarios except the one proposed in the manuscript. Lumbar vertebrae are well-separated bones in CT that are not difficult to segment also with a clever chose of a threshold-based method. I doubt that going for thoracic vertebrae or any other more demanding anatomy the proposed approach would give a decent result. Moreover, compare to modality transfer from MR to CT, the opposite direction from CT to MR is a way more challenging, but also more realistic scenario, since having a bone labels in CT images are more commonly available. It is also not clear why authors used a sum of sines to create non-linear transformation function and not e.g. a polynomial function of the third degree. """,1,0 midl20_102_4,"""The paper proposes a gray value augmentation on data, to make the networks invariant to image intensities. The problem statement is clearly described. However, some important details about the segmentation network, training scheme and labels are missing. Strength: The proposed idea for cross modality segmentation is novel and reported dice scores are fare. Weakness: - The model is validated on a few number of CT images, so the results are not necessarily generalizable. - The choice of segmentation network architecture and also the training scheme is not reported. - It is not clear whether the network is predicting a binary mask for all vertebrae or it is predicting a unique label for each vertebrae. This effects the reported dice scores considerably. - If the segmentation is multi-label (according to Fig. 2), How many labels were used to train the network? There should be 5 (for some cases 6) labels defined for lumbar vertebrae, How does the network handle the thoracic vertebrae visible in the Fig. 2? 
I think another possible validation could be comparing the histograms of intensities in the training dataset (after augmentation) and target test set. """,2,0 midl20_103_1,"""The authors propose a new approach for combining structural and functional MRI data in GCN analysis. The graph nodes represent subjects, where the nodes incorporate features derived from fMRI, while the edges describe the relationship between subjects are computed based on similarity of structural MRI. The method was tested on the full ABIDE I and II dataset and authors reported general improved performance using their approach over a recent GCN method that defines the graph edges based on population demographics. 1. Paper is fairly well written and easy to follow. 2. The proposed idea of combining structural and functional information through using one mode for edges and the other for node features appears novel, as usually functional information is used to define both nodes and edges, or edges are defined based on population or other nonimaging characteristics. Structural information has been used to define graph edges but in the context of the graph representing the brain network for a single subject. 3. The authors present a fairly detailed comparison to a recent state-of-the-art approach that applied GCN in autism classification, including both intrasite 10-fold cross validation experiments as well as leave-one-site-out experiments. 1. Some important details are missing or confusing. - It is not clear exactly what features are used to compute the edges for the competing p-GCN. - The authors state the best atlas for p-GCN and s-GCN are used for the leave site out evaluation, and claim for s-GCN the best atlas was schaefer 400, but from Fig. 3 HO appears to be the best. 2. Experimental results are incomplete and some conclusions do not appear to be supported based on the presented info. - The experiments only use one measure for classification performance - the accuracy. It would be better if other measures (eg sensitivity and specificity) were also shown and discussed. - In Sec 4.1 in comparing p-GCN and s-GCN, the authors claim s-GCN performs better for 6 out of 9 atlases. Given the range of values, it seems to me that s-GCN may potentially truly do better in the first 5 out of 9 atlases. However, without some sort of significance testing, it is difficult to come to a conclusion, and it seems more like it is better half the time. - In Sec 4.3 for the leave-one-site-out experiments, the authors claim that their approaches s-GCN and ss-GCN perform better than p-GCN in 4 out of 5 site with largest number of subjects. Again, it is hard to say that the numbers are better for some (is 61.1 significantly better than 60.9?). Also authors are only showing results for 5 out of the ~30 sites in the dataset. It would be interesting to compare the leave-one-out results across all these sites to get a better idea of whether the proposed approaches really improve classification performance. The authors propose an interesting new method for defining the edges in a population graph, using the similarity in structural MRI between subjects. This method presents a new way of combining structural and functional data (structural data on edges, functional data on nodes of the graph). However, there are some weakness in the experimental section which make it unclear whether the proposed approach results in superior performance.""",3,1 midl20_103_2,"""In this work, the authors have proposed a GCN framework for autism detection. 
The novelty in the work involves a) using the structural MR features from a VAE to determine the edge weights, and b) considering the effect of various methods for temporal brain summaries from fMRI to be used as node features. I believe that the contributions are novel. The idea of exploring explicit structural information as edge features, instead of high-level proxies, is interesting. The results, especially when considering brain summaries, are good. (However, please see the weaknesses section, calling for more discussion on this.) Based on some of the results, the effectiveness of the primary contribution (of adding structural information) seems to be mixed, and some parts of the paper need to be improved. Please see my comments below: a) In the comparison of p-GCN and s-GCN, in Fig. 3 and Table 2, there are small differences between most cases of p-GCN and s-GCN, and in some cases p-GCN outperforms s-GCN. Thus, only adding structural information does not yield consistent results. Only when the brain summary is also included as part of ss-GCN can one notice a good improvement (in some cases). Thus, I am curious how the performance would be if brain summaries are used in conjunction with p-GCN. b) Also, the improvement with ss-GCN in Table 2 is very different across cases. Please discuss such a large variation in the results with ss-GCN. c) The statement on Page 3, ""Hence, as opposed to defining relations ... expected to have lower variance"", is not clearly elaborated. More specifically, what do the authors imply by the structural representations having lower variance? I am assuming that it is the within-class variance that they mean. However, it is not clear. Please elaborate and justify. d) The use of gamma in equation (1) seems like a function, rather than a weight (as mentioned in the paragraph below the equation), as M_h(i) and M_h(j) seem to be arguments of the function. Please clarify / correct this. e) In section 3.1, the authors mention that they use a pre-trained VAE. Please specify what data is used for pre-training, and why it is relevant for this application. f) Perhaps the authors can also compare with a couple of other contemporary methods which also obtained similar results (e.g. Distance Metric Learning using Graph Convolutional Networks: Application to Functional Brain Networks, arXiv:1703.02161). Overall, the work seems to contribute to the progress of the area, considering that the contemporary methods also have similar performances. The direction of the methodology seems novel. Thus, while there are some concerns about the approach, I believe that it can be shared with the community. """,3,1 midl20_103_3,"""This paper presents a graph neural network-based method for autism classification using structural and functional MRI. Specifically, each subject is defined as a node of the graph, the sMRI feature similarity is used for building node connections, and the brain summaries are used to build the node features. The proposed method is tested on the ABIDE dataset. -The paper is overall clearly written and easy to follow. -The idea of using sMRI feature similarity between subjects to build the edges of the population graph is novel in neuroimaging analysis. -The results are tested on different hyper-parameter choices. - The motivation for using sMRI similarity is unclear. Do autism subjects have similar sMRI? Does the scanner affect the measure, as different scanners were used in different sites?
It would be good to see the supporting references or statistic analysis and investigating the difference /relationship of the edges between the graphs in p-GCN and the graphs in s-GCN. - Compared with p-GCN, the proposed s-GCN seems not significantly improve classification accuracy. - Not obvious to validate the statement 'In general, the average variance across all atlases is lower for s-GCN' from Fig 3. The paper is built on existing work (Parisot et al. 2018). The extension seems novel. However, the correctness and motivation of using sMRI to construct edges need further justification. The accuracy is not exciting compared to the other existing works. """,3,1 midl20_103_4,"""The authors extend previous work by Parisot et al to include structural MRI similarity or various brain summaries for functional MRI. Structural MRI similarity is captured by a 200-dimensional latent representation from a pretrained VAE, while brain summaries are represented by a 3000-dimensional feature map from a CNN trained on controls vs ASD classification. Structural similarity from low-dim representations is used to compute edge weights for an adjacency matrix between subjects, where vertices are subjects, while low-dim brain summaries are used as vertex feature vectors. A graph convnet then classifies unlabeled vertices (subjects). Performance is compared using various atlases, brain summaries, and removing specific sites. - Using open data throughout (ABIDE I/II), in particular multi-site data, is a great strength of the paper, enabling approximate comparison with previously published work. - Classification accuracy seems slightly higher than in earlier work (e.g. on ABIDE I with fMRI Nielsen et al. Front. Hum. Neurosci 2013, or with sMRI Haar et al. Cerebral cortex 2014, Sabuncu et al. Neuroinform 2015). Multi-site data is difficult to handle and neuropsychiatric disorders are difficult, so this is a good point. - Evaluating performance of functional connectivity-based measures on multiple atlas is very important (Dadi et al. NeuroImage 2019) and is a strong point here. - Likewise, comparing performance with leave-one-site out is a great idea (but see below). The main weakness is the statistical reporting, in particular in terms of performance metrics and classification confounders such as site, class balance, or sex balance. The focus is almost exclusively on comparison why the baseline from Parisot et al, with little regard to clinical features. - The dataset is imbalanced in terms of class. With high enough imbalance this can drive classification purely by virtue of class frequencies. Authors should compute and show diagnosis proportions (in total and at each cross-val fold), and report on the no-information rate each time. Authors should also report at least one metric measuring differential performance between classes, such as F1, Kappa, or both sensitivity and specificity, rather than overall accuracy. - In Tables 1 and 2, no estimate of standard deviation across folds is provided. It is hard to judge if the differences are significant or not. Are these figure computed on the re-assembled confusion matrix after the 10 folds, or averaged from estimates in each fold? Comparison with figure 3 makes this confusing. - Likewise, the male-to-female ratio for autism is around 3:1 (Loomes et al., JAACP, 2017). Depending on data split this alone can drive classification. The authors should report on male-female ratio in their data. A baseline should also be provided with a basic phenotype-based classifier (e.g. 
random forest with Sex and Age (possibly site)) as input. - For the multi-site study: the distinction between acquisition center and site (appendix B) is not clear and probably muddies the generalization claims. In 4/5 left-out sites, the authors in fact use data from the same acquisition center in the training set. Only Georgetown is cleanly split. At present it is hard to conclude as to generalization ability. - The motivations and process behind important hyperparameter choices (gamma in equation 1 across all methods, VAE latent space dimension, CNN last layer feature map dimension) are not explained and the tuning process is not explained. How do we know this is not done looking at overall results? - It is not clear whether the CNN is retrained at each cross-validation fold from brain summaries Interesting, but complicated and somewhat over-engineered approach for fusing structural MRI and functional connectivity. Nevertheless sensitivity analysis is nice (with caveats) and some improvement is shown over the baseline method.""",3,1 midl20_103_5,"""This is a strong paper which extends recent work on using graph convolutional networks to model phenotypic variation from fMRI. The paper builds from Parisot et al 2017 to build a population graph which uses similarity of latent encodings of structural MRI to create the graph structure. Nodes in the graph are represented from summaries of fMRI data compressed using the 3D CNN. The paper shows variable improvement over Parisot when just the graph structure is modified with sMRI but in some cases the combined and use of sMRI and fMRI encodings are shown to give good improvement for Autism prediction from data collected from different sites. The paper presents a new method for graph convolutional learning from brain MRI, which fully leverages the advantages of graph CNNs to combine sMRI and fMRI. This is important as behavioural and cognitive phenotypes are likely to originate from morphological and dynamic functional sources. It presents a method which is independent from the need to define ROIs from an atlas. This is shown in Figure 3 to lead to very variable performance for the functional connectivity based approach (Parisot 2017) The approach is well motivated and evaluated. Methods are generally clear. Figure 3 isn't wholly convincing as you would imagine that users would always want to select the atlas which corresponds to the best performing model. The two step validation of the changes relative to Parisot et al are not immediately clear. It should be made more obvious that 4.1 refers just the changing the way in which the graph edge structure is learnt and thus both networks are using functional connectivity for nodes at that point. The methods in this paper are novel and well motivated. It's thoroughly validated against a competing method and is shown to offer improvement. The paper shows highly novel combination of sMRI and fMRI for phenotype prediction.""",4,1 midl20_104_1,"""This paper presents a nice application of DL-based segmentation for a common clinical application. It would by nice if the authors made a better effort at making the paper more reproducible for readers. I understand that training parameters and how they were found by the authors may not fit in the length of a short paper. But you could provide a link to your repository, so those interested in the details could read your code, or even try it on their own data.""",3,0 midl20_104_2,"""The paper is about amniotic fluid volume estimation using deep learning. 
The paper is clearly written, the approach makes sense; however, the paper is of limited originality, applying a U-net with basically known extensions to 2D fetal ultrasound data. There is a clear motivation for addressing AFV estimation; however, a motivation for the specific approach is missing, especially regarding how far the presented approach is suited to estimate the real AFV and not only a proxy of it, the AFI, which reduces the problem to distance measurements and which is only used because a true manual volume measurement is too time-consuming. The use of 3D ultrasound, readily available, is not discussed, nor is the relationship of the segmented AF area (a somewhat better proxy to the volume) to the AFI. Many other details are missing, starting with the data (Fig 4: what is the reason for the discrepancy between ground truth and clinical annotation? Why did the clinician deviate from what has been used as ground truth?). The training setup is not clear: What has been used as the loss function? Is it based on the AF segmentation or on the AFI-related ground-truth line length? Meta-parameters of the training, e.g. the number of epochs? The evaluation is incomplete: since the network output is a segmentation (rather than the length measurement done subsequently), the reader would expect a related success measure, e.g., DICE. There is a bit too much in the dark to be a valid MIDL paper. """,2,0 midl20_104_3,"""This paper describes a framework for estimating amniotic fluid index (AFI) from ultrasound images. The framework consists of an automated deep learning based segmentation followed by an estimation of the AFI (the depth of the segmentation) from the segmentation. The segmentation architecture is based on a U-net but also includes atrous convolutions and multiscale side input/output images (similar to the M-net architecture proposed by Mehta and Sivaswamy, 2017). On the positive side, the paper addresses a useful clinical application, there is some comparative evaluation and the results seem good. However, the paper is let down by a number of weaknesses: 1) There are numerous minor errors in scientific English; these are not enough to prevent the reader from understanding the content, but they should be rectified. 2) There is no review of previous attempts at (semi-)automatic estimation of amniotic fluid volume or related parameters. In particular, the following two references should be added and discussed: - Sagiv et al. Application of a semiautomatic boundary detection algorithm for the assessment of amniotic fluid quantity from ultrasound images. Ultrasound Med Biol, 1999. - Li et al., Automatic fetal body and amniotic fluid segmentation from fetal ultrasound images by encoder-decoder network with inner layers, Proc EMBC 2017. 3) The method used for AFI measurement from the segmentation output is not described. 4) A number of points about experimental setup and evaluation are unclear. Specifically: - In Table 1, is U-net a standard U-net or a U-net with atrous convolutions? Also, please add statistical tests to this table to verify whether the improvements for the proposed model(s) over the U-net were significant. - Was the number of model parameters the same/similar for U-net/M-net/AF-net? How were hyperparameters optimised? Was this done separately for each of the 3 models (U-net/M-net/AF-net)? What data were used for hyperparameter optimisation? - Related to the above point - 435 images were used for training and evaluation; how were they used, i.e.
were they split into train/validation/test? If so, what were the numbers of images in each? 5) There is little discussion of results. I know this is only a short paper, but there should be at least a brief discussion. The authors could reduce the number of references (especially the more clinical ones) to make space for this. Other specific suggestions: The title indicates that the method estimates amniotic fluid volume, but the paper is actually about estimating AFI, which is a surrogate for volume. I suggest the title is changed to more accurately reflect the content of the paper. Section 1, first word: Please define the abbreviation AFV. Section 2: Explain what is meant by the AF pocket. Section 3.2: Define what is meant by relative error; what are its units? Figure 4: What's the difference between the yellow and cyan AFI lines? I.e. I understood from Section 3.1 that the expert (clinician?) annotation was used as the ground truth? """,2,0 midl20_104_4,"""The authors present the AF-Net, which is a U-net with three adjustments. The authors show that the AF-Net is more robust compared to the U-Net and M-Net for AFV measurement. Main problem: The authors mention ""the AF are sonographer dependent, and its accuracy depends on the sonographer's experience. This paper aims to solve the above problems by..."", but the authors use 2D ultrasound images made by a sonographer, so the system does not solve these problems. If a sonographer is able to acquire these images, they are also able to perform these measurements. Such a system might speed up this process. Note: the abstract is not included in the PDF. The authors also do not include a section with a discussion. The boxplot shows that six outliers are resolved by the AF-Net, so it can be debated whether reducing (6/435 =) 1.4% of the errors is clinically relevant.""",3,0 midl20_105_1,"""In this paper, the authors investigate the applicability of different continual learning methods for domain adaptation in chest X-ray classification. They include joint training (JT), Elastic Weight Consolidation (EWC) and Learning without Forgetting (LWF) in the continual learning analysis on two chest X-ray datasets, ChestX-ray14 and MIMIC-CXR. 1) The paper clearly describes several continual learning methods and how the experiments are designed around them. 2) The paper is well organized and the design of experiments is reasonable. The results of the experiments are well explained and analyzed. 1) I am curious about what the results would be if the order of the two tasks were switched, i.e. training with MIMIC-CXR first and then adapting to ChestX-ray14. Since MIMIC-CXR has more images than ChestX-ray14, I am wondering if the number of training samples could have any effect on continual learning. 2) It would be nice if there were some comparison with state-of-the-art methods. The idea and motivation of the paper are interesting. The paper is well organized and presented. The experiments are reasonable and the results are analyzed well. Thus, I suggest an accept or a weak accept for this paper.""",3,1 midl20_105_2,"""The authors present a comparison of methods developed to mitigate the effect of domain shift between images acquired with different CXR scanners. All the methods compared have been recently developed and their results are promising; for this reason, their application and evaluation in different problems in the field of medical image processing is highly recommended before any further analysis.
The authors perform this work for CXR with extensive and well-known datasets in an orderly way. It is evident that the methodology followed is more an engineering process than a purely scientific one, yet the value of this type of work for the part of the community more closely linked to the development of new methodologies is undeniable. - The comparison methodology is reasonably rigorous. - The methods used for comparison are those most accepted by the community and those that perform best overall. This fact can speed up the work of a large part of the community working on CXR image processing. - The employed datasets and processing steps are widely accepted by the community. - Application to clinical problems is straightforward. - I miss a slightly more scientific and less engineering-driven approach. - Annotations for MIMIC are obtained employing CheXpert, but the possible errors introduced by this have not been taken into account. - Only one architecture is tested (DenseNet121). Even though this architecture is widely accepted for this task, it would be necessary to extend the study to evaluate the effect of the model choice. - Since the work consists of comparing methods for the DA problem, they could have included more datasets (domains); for CXR there are numerous possibilities. The article is well organised and written. It is very useful to compare the different methods of the state of the art, and it uses datasets widely used by the community, so an analysis like this one can be very useful for a large number of members of the scientific community in the field of medical image processing. """,3,1 midl20_105_3,"""Continual learning is known to retain the old knowledge base and use it to learn newly arriving tasks. However, in this paper, the authors adopt it in a different way that helps generalization to data from a new domain. The authors use the standard EWC and LwF as the CL methods and test on the ChestX-ray14 and MIMIC-CXR datasets. The authors empirically demonstrate that these methods provide promising options to improve the performance on a target domain and to effectively mitigate catastrophic forgetting for the source domain. The authors propose a very interesting idea for domain adaptation, which is to utilize continual learning methods so that the old knowledge base helps learning tasks from a different domain. The paper is well written and organized. 1. I would suggest the authors clearly state which algorithm is proposed and which is prior art. For example, put JT, EWC and LwF in the Preliminaries and only describe your method in the Methods section. 2. EWC and LwF are a little dated; there are many more recent works, for example: GEM (2017) [1] Lopez-Paz, D., & Ranzato, M. A. (2017). Gradient episodic memory for continual learning. In Advances in Neural Information Processing Systems (pp. 6467-6476). DEN (2018) [2] Yoon, J., Yang, E., Lee, J., & Hwang, S. J. (2017). Lifelong learning with dynamically expandable networks. arXiv preprint arXiv:1708.01547. MWC (2019) [3] Zhang, J., & Wang, Y. (2019, October). Continually Modeling Alzheimer's Disease Progression via Deep Multi-order Preserving Weight Consolidation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 850-859). Springer, Cham. All these methods have released their source code; did the authors try them on the dataset? Or could the authors give more intuition about why EWC and LwF were chosen? 3.
In EWC, ""Fisher matrix calculated over the ChestX-ray14 training samples"", I wonder whether the author does the ablation study on how many samples should be used for computing the Fisher matrix will yield the best results. 4. I was confused by the experimental setting ""4"" : ""The LWF penalty was only applied to the 7 labels not present in the MIMIC-CXR dataset"", can the author make it more clear, is that means the LWF soft-label only compute on the classes in ChestX-ray14? For the remaining 14 classes on MIMIC, there is no LWF loss applied? This paper proposed a very interesting idea and will become a new strategy for solving domain adaptation on many medical image problem. The paper is well written and organized. I would give a weak accept and will give my final decision based on the author's feedback in rebuttal. """,3,1 midl20_105_4,"""This paper applies some of the current CL methods to a Chest X-ray classification problem with sequential learning on two datasets, involving both domain shift and task shift. The goal is to train a model that performs well on both datasets, including previously seen classes (with potential domain shift) and unseen classes. + The paper is well-written with clear explanations of previously existing CL methods and experiments. + The problem that the authors are trying to solve is practical for potential healthcare applications. - The authors used binarized Fisher information for the EWC approach, although a heuristic explanation is given in the last paragraph of page 4, no references or ablation studies were given in the experiments to justify the use of binary EWC regularization. - Lack of validation on a larger number of tasks/datasets ( 5, 10 tasks), which is typical in CL. I vote for borderline. On one side, the authors demonstrate the forgetting problem caused by domain shift, which can be addressed by current regularization-based methods in CL. However, it would be more convincing if the author can validate on a larger number of tasks/datasets instead of using only two datasets to draw a fair conclusion. That is to say, with fixed model capacity, are parameter regularization based CL methods really suitable for practical usage?""",2,1 midl20_106_1,"""Summary: The authors present a neural network architecture for nuclei segmentation and classification from histology images using an interactive approach. This is a very relevant problem right now as deep learning architectures are highly accurate but at the cost of requiring a large number of annotations. The reported results are good but, I consider that the authors do not provide sufficient details on their method. originality: The authors address a very relevant problem. clarity: The paper misses to provide sufficient information on how it works. Many terms that appear in the equations are not properly introduced. quality: The authors have classified their paper into the category well-validated pape. I tend to disagree. The reported results are good but, their evaluation remains a bit superficial with a single experiment comparing their method to a fully automated approach and a manual one. 1) There are many state of the art semi-automated methods that could have been included in the comparison. 2) I would expect more experiments to understand the role of the different elements composing their framework and how these affect performance (e.g. ablation study, quality of inputs,, etc). Globally, i think the paper needs to be improved before publication. 
Pros: - The authors address a relevant problem, which is how to ease the process of data annotation for deep learning methods, which are usually data-demanding. - Reported results are good when compared to fully automated methods. Cons: - The authors do not position their method w.r.t. the state of the art. There are numerous works trying to address this problem. - In the description of the method, there is little information about how the interactivity is achieved. This is limited to introducing Eq. 2, and no further details are provided. How the second network intervenes in the whole process is also not clear. """,1,0 midl20_106_2,"""- Brief summary of your review. The paper proposes a semi-automated interactive tool to produce high-quality annotations more quickly than manual annotation of anatomical structures and tracing of their boundaries. They use MoNuSAC histopathology data with four classes of labeled and annotated nuclei. The authors claim that the semi-automatic pipeline is 5x faster for annotation than totally manual processes. - What best describes the contribution of this paper? Please take the paper type into consideration for the rest of your evaluation. For instance, a strong method paper should not be rejected for limited validation. Similarly, a strong validation paper should not be rejected because of lack of methodological novelty. O methodological development O validation/application paper O both - In 3-5 sentences, describe the key ideas, experiments, and their significance. The work borrows multiple ideas from different papers in the literature and combines them for the application of semi-automatic segmentation of nuclei datasets. The network architecture includes a context-aware network (CAN) for segmentation as well as instance classification. Besides the VGG features, the networks include positive and negative user click maps to enhance the model performance. A relaxed Jaccard index (IoU) loss is used for training the models. The experimental results on the MoNuSAC histopathology dataset confirm the superiority of the semi-automatic method. - What are the strengths of the paper? Clearly explain why these aspects of the paper are valuable. The method has an Aggregated Jaccard Index (AJI) comparable with that of human annotators, and saves about 70% of the annotation time on the MoNuSeg dataset. The models achieve a great reduction in the amount of annotated data required for the neutrophil and macrophage nuclei types. - What are the weaknesses of the paper? Clearly explain why these aspects of the paper are weak. Please make the comments very concrete based on facts (e.g. list relevant citations if you feel the ideas are not novel) and take the paper type (method or validation paper) into account. In Fig. 2, the authors have not shown the performance results of the automatic segmentation for each individual class. The authors mention that during interactive segmentation the user provides points inside and outside the object instances of importance; however, further details on such points are missing, e.g. are they provided on a per-class basis, or are different markers used for the inside and outside of an object of interest? It would be interesting to conduct and report more experiments on automatic segmentation using ResNet, U-Net, and Mask R-CNN architectures. - What would you like the authors to address in their rebuttal? (Focus on points that might change your mind.)
Add the performance of state-of-the-art automatic deep learning methods. - Justification for rating. The proposed methods and improvements are very valuable but need more experiments and clarifications to be validated. """,3,0 midl20_106_3,"""In this paper, a semi-automatic interactive tool is developed for nuclei segmentation and classification. The method is not novel; the authors slightly modified Li et al.'s method, without any explanation or comparison to the original method. It is not clear how the proposed method performs nuclei classification. The authors don't mention any loss function for the classification branch or any manual input about the class label. The good aspect is that the tool will be beneficial to the community if made publicly available.""",2,0 midl20_106_4,"""The paper presents interesting ideas and clear goals. Unfortunately, the description of the method is poor/not clear. It is not clear how the dataset was split into training/validation/test sets. It is not clear how the method was trained and evaluated. Based on the current description it is not possible to reproduce the results or experiments. The authors say that their method was significantly more accurate; however, statistical tests were not conducted or are not presented. The presented work should be re-written/extended to be useful for the scientific community. """,2,0 midl20_107_1,"""The paper proposes self-supervised learning to pre-train a U-Net model on a reconstruction task using benign cases, which would otherwise not be used because they contain no annotations. This U-Net is later used to segment lesions using manual annotations. A public dataset is used for evaluation. The use of self-supervision is interesting here, and shows some improvement when compared to using full supervision. In the evaluation, masses and calcifications are merged, which may have an impact on the clinical applicability of this system. No comparison with existing models on the same public dataset is reported.""",3,0 midl20_107_2,"""- The general idea of this paper is to design a training process to improve segmentation. - The authors propose to incorporate unlabelled data into the training process via self-supervised learning. - I think the paper is well-motivated, but too many details are missing, making it not possible to understand. """,2,0 midl20_107_3,"""The authors claim 1/8 screening mammograms contain a malignant lesion. This is obviously false. It is also unclear why segmentation would be an important task in screening. The goal is to select images with suspicious findings, ensuring not too many false negatives while achieving a good positive predictive value. Many of the lesions one wants to detect are calcified and cannot be trivially segmented. Therefore, it is surprising that, as a preprocessing step, the images are rescaled to a resolution of 1536 x 1536. For these calcified lesions, this can remove a lot of structure (and this ignores that these images all have different aspect ratios). While the method might be interesting to show, there are too many flaws (explanation of the method, details of the dataset, details on training and so on) and too little context for it to warrant acceptance.""",1,0 midl20_108_1,"""A well-written paper, and a great hypothesis that can be used as a baseline for several application-based projects in the medical imaging community.
The authors extracted labels from a large number of cancer imaging datasets from TCIA to train a medical-domain 3D deep convolutional neural network. Then, they evaluated the effectiveness of their proposed network in transfer learning on a liver segmentation task and report that their network achieved superior segmentation performance (Dice = 90.0%) compared to training from scratch (Dice = 41.8%). * A well-written paper with all the information provided in the Supplementary as well. * Good use of diagrams. * Good 'n' in the dataset curation. * Well-laid-out hypothesis. * Technically sound. * Code has been shared for reproducibility via GitHub. There are a few minor typos in the submitted paper. I suggest the authors give the paper a thorough read. For somebody with no previous knowledge, it is suggested that the authors spend some time explaining what DICOM attributes mean. For example, if Scanning Sequence is known by (0018,0020), is this universal and consistent across all sites, or is this vendor-specific? The authors have attempted to establish an equivalent of ImageNet in the radiology space by using multiple TCIA cancer cohorts. Massive datasets have been curated and QCed for this purpose. The paper should be recognised for this. The authors have also gone one step further by demonstrating the use of such a model in applications by segmenting the liver. """,3,1 midl20_108_2,"""This is a very interesting paper that demonstrates a strategy for building an ""ImageNet"" for medical imaging that can be used for transfer learning. The idea is to train a network to classify DICOM attributes, which are easy to automatically extract. They then show that transfer learning applied to a specific task improves performance compared to training from scratch. This is a great idea for how to generate a pre-trained model for use with general deep learning based medical image analysis. The idea is simple yet powerful, and the fact that the concept was tested on a common deep learning based medical image analysis problem - segmentation - makes this overall a great paper. It isn't clear why freezing all of the layers in the network would lead to better segmentation performance. A more detailed discussion of this, or at least a reference to a paper that discusses this topic, would be beneficial. It would have been optimal if an experiment had been added that compared transfer learning performance using the DICOM tag labeling network vs a network trained on ImageNet. A high rating is given to this paper because it is likely to have a high impact. Medical imaging doesn't have its own ""ImageNet"" that can be used for transfer learning from the domain of medical images. This paper shows how to do that and shows that it works well. The experiments are well designed and the results are impressive.""",4,1 midl20_108_3,"""- Built a 3D network to identify relevant images for training based on identifying different image characteristics (view, resolution, contrast). - Ran on TCIA to build a cohort for training a network to classify DICOM tags based on input images. - Application to liver segmentation on a reasonably sized cohort. - Well-motivated introduction explaining the technical need for building medical-image-specific networks and training cohorts.
- The specific approach here is interesting, but it is unclear how the different networks come together. - Good summary of methods and explanation of how each DICOM tag can be used. - Fairly comprehensive evaluation of both networks, even if segmentation performance is on a small cohort. - A little unclear how the DICOM-based network is to be used. Once you identify relevant scans, they still need annotations to be used for training? Is the goal to have as many scans identified as possible? - Not clear why we are predicting DICOM tags using images... is this in case the tag is not part of the image? - Not clear why 3D-RADNet is being applied to a liver segmentation dataset. - No statistical testing performed. Interesting ideas and reasonable validation on two different cohorts, but the exact use of the network and the experimental design are very unclear. Some limitations mentioned by the authors fundamentally undermine the paper. """,3,1 midl20_109_1,"""The authors propose a data-driven method to predict anatomical changes in the brain over time, encoded as a deformation field. The main contribution is to employ a neural network to predict the parameters of a diffeomorphic deformation field that morphs the images between exams, instead of directly predicting the anatomy at a later exam. The paper addresses an important challenge in neuroimaging and is well written and validated. It is unclear if the method is predicting a deformation over time, that is, its evolution, so that the intermediate deformed shapes between the initial scan and the follow-up are meaningful or realistic, or if the method predicts the anatomy at the follow-up exam and uses a diffeomorphic field to regularise the problem or to try to give some intuition about what the neural network is doing. Is the time between both imaging procedures always the same? The authors indicate that there are some optional attributes (encoded in an attribute vector) that can be used, but it is not clear if this is actually used in the experiments, and what its impact is. How is this reflected in Fig 2 or Fig 3? The authors say that they use the segmentations to train the model, but Fig. 1 shows images at both ends; I think this could be further clarified.""",4,0 midl20_109_2,"""The paper presents a way to incorporate subject-specific demographic and medical information along with the brain image scan to improve the deep neural network for predicting longitudinal brain deformation. The results demonstrate a level of effectiveness of the proposed approach, although I have some specific concerns and comments that are listed in the detailed review below. *Major comments* The authors didn't include the time difference between the training and predicted images in the ""vector of subject-specific medical attributes"". This raises my first question: for multi-timepoint longitudinal data, it is not clear whether and how the authors accounted for registration between timepoints separated by different lengths of time. Structural segmentations are used for generating the evaluation metrics (Dice score and surface distance). It's not clear how the segmentations are generated. If they are generated using image registration, then that means the registration-derived deformation field is considered as ground truth. Then why not use this deformation field directly for validation?
Furthermore, if the deformation field is the output to be optimized, should the deformation field itself be used to calculate the loss function, and image similarity (the current term of the loss function) be used as the validation metric? This way, no additional registration-based segmentation would be required during the evaluation. This leads to my third, critical question regarding the ultimate purpose of the study. In the introduction, the authors claim that ""providing an accurate prediction of the entire brain can give a richer phenotype for use in analysis or clinical practice."" It would be better to state/discuss more about what the ultimate end goal of this generative model would be. If segmentation is mainly used for generating clinically relevant features such as volume or shape, would it make more sense to predict the longitudinal structural change (e.g. volume, shape, thickness, etc.)? What is the comparative advantage of predicting the brain deformation, given that there are significantly more parameters to train and fine-tune, which will make the generative model less robust? (E.g. would the benefit be something like the ability to perform voxel-based morphological analysis?) Similarly, if a better clinical score/diagnosis is the ultimate purpose, would this method provide better/comparable/worse results in predicting future clinical diagnosis, as compared to alternative generative models that predict clinical scores/diagnosis directly, given that the latter need even fewer parameters to be trained? *Minor comments* Some of the equations might need to be clarified a little more. In the paragraph before equation (2), an expression of the form ""= Id + u_v"" is used: the symbols involved are not defined. In equation (4), it is not clear what cst means. I would suggest adding a little more detail about how the ""subject attributes"" are concatenated to the baseline scan images, either in the text or in the Figure 1 legend. Are those attributes remapped to the size of the image and concatenated as additional channels (although the schematic diagram makes it seem like the images have been vectorized before being concatenated into a 1-dimensional tensor)? If so, what happens if a subject doesn't have specific attributes recorded (such as the education level or clinical score)? Validation: Figure 2 and its legend are a bit confusing. What does scan 0 mean? Should *Ext* be referred to as the full set of ""subject-specific medical attributes"" rather than ""external data"" as stated in the legend? Why is the upper bound (ground truth) named ""oracle""? The baseline of ""integration of the average registration velocity field (mean) in the training set"" seems a strange baseline metric to me.""",3,0 midl20_109_3,"""This short paper proposes a method for predicting future brain images with the use of a deformation field. The proposed method is interesting and focuses on an important problem. It is generally nice work, and some comments are as follows: Pros: - The proposed method is technically valid. - The idea is promising and will likely be a hot topic in the field of brain image prediction. Cons: - What are the clinical values of the proposed method? - What are the differences between this paper and Dalca et al. 2019 (NeurIPS)?""",4,0 midl20_109_4,"""This paper covers an interesting topic in predicting the evolution (i.e. decline) of human brains for neurodegenerative disease. This is nice work, but the motivation is not well spelled out.
It would be beneficial to state more clearly that it might be useful to tell at a given point how a disease will develop. In the future it might even be possible to give the well-known ""brain age"" by analyzing development curves. This clinical aspect is not well developed in the paper, but would greatly strengthen the significance of the described work. To me, it was unclear what the input data looks like. Did you use follow-up brain scans from the ADNI study? If yes, what was the time between baseline and follow-up? I also didn't get your comment on considering ""three baselines"". Do you mean you trained three different networks? Does the curve ""no change"" in fig. 3 mean that these are the dice coefficients of the original images? Again, I don't think the term ""three baselines"" is well chosen. There should be one baseline and one follow-up scan. The results and the considerations drawn from them have to be clearer. I do not see the conclusion that additional clinical information improves the results being backed by the presented data. """,3,0 midl20_110_1,"""The idea of learning a prior distribution on convolution kernels is methodologically sound and appealing. This new way of transfer learning could potentially be more effective than fine-tuning and L2 regularization (which basically is a zero-mean Gaussian prior). The preliminary results are reasonable. Now the authors should think about how to further extend and validate this work in the following two aspects: 1. How can the generative power of the VAE be used in the segmentation model? Can you learn a family of DNNs to improve segmentation or quantify uncertainty? 2. How does the prior compare to other standard regularization approaches? """,4,0 midl20_110_2,"""The authors present a method to pre-train deep neural architectures for the purpose of medical image segmentation in the situation of small training datasets. This method is based on learning a prior distribution over the CNN weights using a generative model referred to as deep weight prior (DWP), proposed by Atanov et al. The authors propose to learn the kernel distribution from a source dataset consisting of MRI of multiple sclerosis (MS) patients and apply it to the task of segmenting brain tumors from the BRATS18 MRI dataset, considered as the target domain. UNet is used as the backbone architecture. The proposed method is compared to three baseline methods, namely a model directly trained on the low-sample target data (BRATS18) based on standard random initialization (UNet-RI), a model whose weights are pre-trained on the MS datasets (UNet-PR), and the UNet-PR model fine-tuned on the BRATS18 dataset (UNet-PRf). Results based on the intersection over union metric indicate that the model performs better than UNet-PR and UNet-PRf but comparably with UNet-RI. I have one major concern regarding the validity of the hypothesis grounding the DWP method proposed by Atanov et al. The authors indeed assume that the source and target kernels (network weights) are drawn from the same distribution, so that the source kernel distribution that is learned can serve to perform Bayesian inference on the target data. I am not sure that this assumption holds for the source (MS) and target data (BRATS). The diagnostic tasks are indeed very different, so that, I guess, the kernels are likely to differ. I am not sure that DWP is the best suited for this specific transfer learning task.
This may explain why, as suggested by the authors, UNet-DWP does not perform much better than random initialization (UNet-RI). The description of the DWP method as well as of the source (MS) and target (BRATS18) datasets should be more detailed, drawing on the recently published paper in Frontiers in Neuroscience. """,3,0 midl20_110_3,"""The authors use the deep weight prior to learn an implicit prior distribution over the weights to facilitate transfer learning. This allows the model to mitigate overfitting on the target task when limited labeled data is available. To evaluate, an MS dataset was selected as the source and small subsets of the BRATS18 dataset were selected as the target. The evaluation was performed on a fixed number of target images while having access to varying amounts of labeled data from the target domain. """,4,0 midl20_110_4,"""The paper proposes to apply Deep Weight Prior to the problem of transfer learning in medical imaging. The authors learn a U-Net on MS lesion segmentation and evaluate transferability to the BRATS2018 dataset. The use of DWP is well motivated and the results indicate improved performance over regular transfer learning. Following the results of the paper, I believe the use of DWP can improve settings in medical image analysis with only limited available training data but availability of related datasets. However, the authors should improve the explanation of DWP and introduce the variables used. For example, I assume that k in the equation on page 2 refers to the input and output channel dimensions of the convolutional kernels. It would be interesting to report Dice scores, which are more commonly used for the BRATS dataset. It would be beneficial for the authors to release code or add training details to the appendix, as the results seem to be irreproducible in their current form. Lastly, a longer study should test different freezing regimes for transfer learning, as freezing the middle seems like a rather arbitrary choice. Minor: - page 2, 1. dataset instead of dataest - page 2, the figure and the enumeration could use a little margin between them""",3,0 midl20_111_1,"""I found the paper easy to read and containing interesting results about how some baseline models outperform more sophisticated ones. I am recommending acceptance, but I have some remaining doubts that I would like to be answered, if possible. Namely: 1) I do not see very clearly from the text what the authors mean by joint learning. I believe Figure 1 could serve the purpose of actually clarifying what is happening in each scenario; unfortunately, it has a very poor caption. Could the authors add a short description of each of the four schemes in that caption, and label them as a), b), c), and d)? I believe that would help a lot. 2) I don't understand this sentence: ""the lesion class is under-represented"". I guess it is because of the use of the word ""class"", which makes the reader think about classification. Do you actually mean ""slices containing liver lesions were under-represented as opposed to lesion-free slices""? Because later in the text, the number of examples for each lesion class is mentioned, and they seem pretty balanced. Anyway, if what I am saying is the case, are you using weighted cross-entropy, or rather oversampling slices with lesions during training?""",3,0 midl20_111_2,"""In this paper, the authors combine the advantages of joint learning and transfer learning to improve the performance on liver lesion segmentation and classification.
Although the techniques are not new, it is good to use them in new applications. I am just curious about the performance in the following settings: (1) The segmentation performance on the authors' private data using the pretrained model from the LiTS dataset. (2) The segmentation performance on the authors' private data if only a segmentation model is fine-tuned instead of the joint model. Setting (1) can show us how much the fine-tuning can improve performance based on the good pretrained model from the LiTS dataset. Setting (2) can show us whether the joint learning has a bad effect on the segmentation task.""",3,0 midl20_111_3,"""(+) An exhaustive validation of different experimental setups for liver lesion segmentation and classification on a self-collected database is provided. (+) Figure 1 nicely summarizes the tasks and experimental setups. (+/-) The paper is mostly well written. However, I would recommend restructuring Section 2. Your paper is of the type well-validated application. Thus, first describing the given segmentation and classification tasks and the collected data, and afterwards the experimental setups, seems more reasonable to me. In general, the paper focuses too much on the methodology of transfer and joint learning in my opinion. (-) Your motivation is quite weak. Why is such an automatic classification approach required? Please specify in the abstract/introduction. (-) No comparison to existing approaches on liver lesion segmentation (e.g. winners of the LiTS Challenge) is performed. (-) Transfer, joint and multi-task learning are well-known approaches to deal with limited data. No methodical tricks are presented. (-) ""The first framework incorporates a multi-task U-Net, [...]"" This is confusing, as Figure 1 shows the multi-task approach as the third framework.""",2,0 midl20_111_4,"""This work is based on a private dataset of 332 CT slices (not volumes!) from 140 patients with three different types of annotated liver lesions (cysts, hemangiomas, mets). They compare two multi-task approaches for segmentation and classification and two baseline (ablation, single-task) approaches on this dataset. Additionally, the encoders are either randomly initialised or pretrained on ImageNet (out of domain) or the LiTS dataset (same domain). The number of 2D slices being used is relatively small, which limits the contribution to some degree, but the setup is solid and the results definitely interesting. I missed some details on the architectures (numbers of filters, for instance) and possible image preprocessing. I also wondered if the number of resolution levels is really only 3, which would limit the receptive field (without knowing any details about the employed blocks, it is hard to guess, but it could be around 44 pixels theoretical maximum, the ERF being even smaller). It would also have been interesting to do an ablation study on the SE (squeeze & excitation) blocks, but at least they were used in all four compared approaches, so the comparison is fair. Overall, I would rate it between 3 and 4, but I do think it is a nice contribution to MIDL, so I voted 4 (""strong"" accept).""",4,0 midl20_112_1,"""The paper proposes a morphological signature for characterizing the quadriceps in MRI images. The utility of the signature is investigated in two cases, atlas-based segmentation and data augmentation for training a UNet. Experiments are performed on MRIs collected from 43 subjects. Results indicate that the morphological signature is beneficial in both cases.
The proposed morphological signature is simple and, as shown in Figure 3, seems to capture important variation in morphology. Further, since it is defined on a single 2D slice, the annotation burden is relatively small. Defining good data augmentation for segmentations can be tricky. The proposed method, where a small effort leads to improved augmentation, could be useful for many segmentation tasks. Overall, the paper is not well-written and there are many grammatical errors. There are many sentences and paragraphs where the meaning is unclear and there are many places where information is missing. For example, on page 3, second bullet point, the ""center"" of various muscle heads is used to define morphological features. However, ""center"" is not defined. Same issue further down on p. 3, where ""features were centered and scaled"" without explicitly mentioning how they were scaled. These are minor things and I can make an educated guess. However, there are many examples of this, and it should be possible to reproduce the experiments based on information in the paper. I also miss information regarding the UNet experiments, e.g. convergence criteria, that should be provided in a supplement. There are two parts of the paper, JLF and UNet. The two parts are mostly self-contained and it seems like the UNet part was added at the last minute. This adds to the impression of unfinished work. 7 out of 50 subjects are excluded based on visually low-quality segmentation of the center slice. This sounds like all the hard cases were excluded. You should report the criteria for exclusion and provide illustrative examples. You should also discuss the implications for the reported results. In the data augmentation experiments you only use 10% of the slices for training. This seems like an incredible waste of data. Why do elaborate data augmentation instead of just using the data you have? Training on the full volumes without augmentation would provide a good baseline, as would augmentation with random deformation fields. In general, I find the interpretation and discussion of results to be lacking. - The polar coordinate features are stated to be rotation invariant. However, the results clearly show that this is not correct. It might be that the variation is insignificant, but the plot in Figure 3 is not convincing in this respect. - Performance of JLF is measured using either all six atlases or the three closest. You should also compare with three random atlases, to verify that picking the three closest is actually valuable. - Performance of the JLF variations is measured with three metrics. I wonder, is a difference in mean absolute distance of 0.03 mm meaningful? This is less than 5% of the width of a voxel. The Hausdorff distance indicates that large errors are made, and you should focus on these errors. - Tables 2 and 3 show UNet performance. It is clearly worse in terms of Hausdorff distance. The explanation that this is caused by processing each slice individually in the UNet is likely correct. However, it is not at all clear that this is solved by a 3D UNet, and in no way does it support that the UNet will have ""better generalization capability"". What it shows is that comparing Dice is probably a bad idea in this case. - Figure 5 shows an example segmentation that clearly demonstrates that all methods fail. In all four cases, the errors are unacceptable for most uses, yet there is no discussion of this in the paper. The issues with clarity alone justify rejection.
It is crucial that research is presented clearly and precisely. When this is not the case, the work is likely misunderstood and replication studies impossible. Similarly, results must be presented and analyzed in a manner that provides the best possible characterization. Failing to discuss the very severe errors in Figure 5 is alone reason to reject the paper. I do not have expertise in this specific application, nor have I reviewed the relevant literature. However, the issues with this paper are primarily of a general nature, so I do not consider this relevant for the rating.""",1,1 midl20_112_2,"""This paper proposes morphological features to characterize the similarity of quadriceps muscles across different subjects. The proposed features include volume and relative spatial relations among the four muscle types captured by polar coordinates. For application, the proposed morphological features were applied for atlas selection in multi-atlas segmentation and for data augmentation to choose less redundant data for deep learning based segmentation. Overall, the paper is well written. Using morphological features to capture inter-subject similarity is an interesting idea and has not been well studied yet. The proposed features are intuitive and seem to work well. Clarification needs to be improved for some parts of the paper. The data used in this study include some with manual segmentation and some with semi-automatic segmentation. For data augmentation, the paper mentions that both images with manual segmentation and images with weak automatic segmentation are used for training. However, based on the description in 5.1 and 5.2, it seems that only the images with manual segmentation were augmented and used for training. The images with semi-automatic segmentation selected for data augmentation were not really used for training. Instead, they were used to warp the images with manual segmentation. Some choices made in the paper are not well justified. Experimental validation is on a small data set. The idea of using morphological features to measure image similarity for atlas selection and data augmentation is interesting, which is different from the commonly used image-similarity based methods. Although the work was only evaluated on a small data set, the proposed method is likely to work on larger datasets as well. """,3,1 midl20_112_3,"""The paper presents both multi-atlas segmentation and UNet segmentation and applies the methods to real data. JLF and CL are compared with a U-Net with data augmentation. It reduces the segmentation time from 48 hours to 45 seconds. The quadriceps muscle segmentation is a good clinical application. To train the U-Net with limited data, the data augmentation is guided by morphological features. A PCA feature space analysis is included to visualize the location and distribution of the images. The clinical value is introduced by reducing the processing time from 48 hours to 45 seconds. The methods are basically existing approaches put together for muscle segmentation, so the novelty is limited. It is not clear if multi-atlas and deep learning are fairly compared, since multi-atlas literally is not a training/testing mechanism. The definition of the naming (i.e., ALB) is not provided. The organization of the paper needs to be improved, especially the relationship between the U-Net and JLF. The methods are basically existing approaches put together for muscle segmentation, so the novelty is limited.
The method is not compared with the latest state-of-the-art segmentation methods. The size of the training data for the U-Net is very limited. Even with data augmentation, the variations might not be good enough to train a U-Net with correct information. """,2,1 midl20_113_1,"""This work introduces a Bayesian deep learning approach for solving Quantitative Susceptibility Mapping. Given a local field, the method generates a susceptibility map. Supervised and unsupervised learning are combined in order to generalise from healthy data to data with haemorrhage. The method is principled, fits with the underlying optimisation problem and achieves promising performance. - The manuscript is well-written. I would like to congratulate the authors for their clarity. - The proposed method is principled and relevant regarding the underlying optimisation problem. - The solution seems easy to implement and efficient in practice. Method - The authors argue that, given the ""intrinsic ill-posedness"" of the problem, ""a prior term is needed"". However, the term with the prior introduced in Eq 8 is removed in the final formulation (Eq 12). This means that you assume that the network inherently induces some constraints that are beneficial for your problem. I don't see why this would be the case. Moreover, I suspect, as explained later, that better results are obtained without the regularisation because your model overfits. - How do you estimate the parameter appearing in Eq. 12? Experiments - It would seem that the unsupervised learning component via VI was trained and tested on the same data. If so, this is for me a major weakness of this work. The network may overfit on the testing data. Moreover, although you don't use any annotation for this task, this can be seen as an optimisation per subject. In this case, there is no clear advantage of using a deep learning approach compared to other optimisation methods. The paper is clear and the approach is principled. The authors exploit variational inference with neural networks to solve the optimisation problem, which seems to be novel and a good idea given the formulation. However, I have concerns regarding the validation.""",3,1 midl20_113_2,"""This paper proposes a supervised Bayesian learning approach, namely Probabilistic Dipole Inversion (PDI), to model data uncertainties for the quantitative susceptibility mapping (QSM) inverse problem in MRI. The paper employs a dual-decoder network architecture to represent the approximated posterior distribution and uses a MAP loss function to train the approximate distribution when labels exist. When new pathologies appear in the test data, the proposed method minimizes the KL divergence between the approximated posterior distribution and the true posterior distribution, based on the variational inference principle, to correct the outputs. Experiments show that the proposed method can capture uncertainty compared to other methods. 1. This paper proposes a supervised Bayesian learning approach, namely Probabilistic Dipole Inversion (PDI), to model data uncertainties for the quantitative susceptibility mapping (QSM) inverse problem in MRI. The motivation is good. 2. Experiments show that the proposed method can capture uncertainty compared to other methods. 1. The authors state that they use the forward model in Eq. 5 for computation in this paper. However, PDI-VI1 and PDI-VI2, represented by Eq. 11 and Eq. 12, respectively, use the forward model in Eq. 4. The authors should unify the expression according to the actual situation. 2.
For the unsupervised variational inference case, it is not clear whether the Fourier matrix, the dipole kernel, and the noise covariance matrix in the likelihood term are parameters that need to be optimized or have been determined before training. 3. This paper uses a dual-decoder network architecture to represent the approximated posterior distribution. It is better to provide the specific network architecture adopted in the experiments rather than just a simple schematic. 4. This paper uses three quantitative metrics, namely RMSE, SSIM, and HFEN, to measure the reconstruction quality. Please give the full names of the three quantitative metrics. Other papers (Yoon et al., 2018; Zhang et al., 2020) also use the peak signal-to-noise ratio (pSNR) to measure the reconstruction quality. It would be better to also report pSNR in the experimental results. And please further explain why the performance of MEDI is better than PDI and QSMnet on the HFEN metric. 5. This paper states that the experimental results show the proposed method yields optimal results compared to two types of benchmark methods: deep learning QSM (Yoon et al., 2018; Zhang et al., 2020) and maximum a posteriori (MAP) QSM with convex optimization (Liu et al., 2012; Kee et al., 2017; Milovic et al., 2018). And they compare PDI with MEDI (Liu et al., 2012) and QSMnet (Yoon et al., 2018). It would be better to add experiments that compare with other more advanced benchmark methods, such as FINE (Zhang et al., 2020). 6. Figure 3 shows the reconstructions and standard deviation maps of two ICH patients. Please explain what the red rectangle highlights for a better understanding. 7. The presentation should be improved. For example, the paper writes ""In this paper, we come up with a framework by combining Bayesian deep learning to model data uncertainties and VI with deep learning to approximate the true posterior distribution"" and ""we developed a Bayesian dipole inversion framework for quantitative susceptibility mapping by combining variational inference and Bayesian deep learning"", but VI is just an inference method, which could be included in Bayesian ML or Bayesian DL. This paper proposes a supervised Bayesian learning approach, namely Probabilistic Dipole Inversion (PDI), to model data uncertainties for the quantitative susceptibility mapping (QSM) inverse problem in MRI. The motivation is good and this is an interesting application of Bayesian learning. The results look good, while some details are unclear. The presentation can be further improved.""",3,1 midl20_113_3,"""A Bayesian approach to solving the quantitative susceptibility mapping (QSM) inverse problem in MRI is proposed. The authors propose to approximate the posterior distribution of tissue susceptibility using a diagonal-covariance Gaussian with mean and variance predicted by a neural network. The overall framework is reminiscent of VAEs, except with a known generative model given by the physics of the problem. From that analogy, the inverse problem is approached as that of learning an optimal encoder. The approximating network is pretrained in a supervised manner on healthy subject data with known susceptibility/local field pairs, and fine-tuned on test subjects using KL-divergence minimization. The experimental validation is still preliminary, conducted on the order of 20(?) subjects. The paper is very well written; the introduction and description of the method are of high quality. The approach is sound.
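For readers less familiar with this construction, the following is a minimal, self-contained sketch of the general technique described above (a diagonal-Gaussian variational posterior fit against a known linear forward model by minimizing a negative ELBO). It is an illustration of the standard approach only, not the authors' implementation, and all function and variable names are hypothetical.

# Illustrative sketch only (not the paper's code): fit a diagonal-Gaussian
# approximate posterior q(x) = N(mu, diag(exp(log_var))) for a linear
# inverse problem y = A x + noise by minimizing a negative ELBO.
import torch

def negative_elbo(mu, log_var, y, A, noise_var, prior_var=1.0):
    # Reparameterization trick: draw one Monte Carlo sample from q.
    x = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
    # Expected negative log-likelihood under the known forward (physics) model.
    residual = y - A @ x
    nll = 0.5 * torch.sum(residual ** 2) / noise_var
    # Closed-form KL(q || N(0, prior_var * I)) for diagonal Gaussians.
    var = torch.exp(log_var)
    kl = 0.5 * torch.sum(var / prior_var + mu ** 2 / prior_var
                         - 1.0 - log_var + torch.log(torch.tensor(prior_var)))
    return nll + kl

In the method discussed above, the mean and variance would come from the encoder/decoder network and the forward operator from the dipole-convolution physics; the sketch only illustrates the likelihood-plus-prior-KL structure underlying the KL-divergence minimization mentioned in the review.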
The experimental validation seems limited, but the results that are shown are, again, well presented and interesting. I have limited knowledge of the application itself and cannot fully judge, but the paper provides the necessary material to understand the task and challenges at a high level. Overall the paper is very pleasant to read and the content of the paper / validation is well aligned with the original claims. I do not really have important weaknesses to point out. I have a few minor questions that come to mind about choices made in the paper, but they are not essential to address in the rebuttal (see comments). The paper is very well written; the introduction and description of the method are of high quality. The approach is sound. The experimental validation seems limited, but the results that are shown are, again, well presented and interesting. I have limited knowledge of the application itself and cannot fully judge, but the paper provides the necessary material to understand the task and challenges at a high level. Overall the paper is very pleasant to read and the content of the paper / validation is well aligned with the original claims.""",4,1 midl20_114_1,"""The paper proposes a Laplacian pyramid-based CNN for reconstruction of MR images from undersampled k-space data to accelerate MRI acquisition. The authors have demonstrated using a Laplacian pyramid-based scheme to recover undersampled k-space data and reconstruct MR images. Comparisons with other state-of-the-art methods show an improvement in PSNR and SSIM on a brain MRI dataset. An approach to reconstruct undersampled MR images and accelerate MR imaging, which results in higher PSNR and SSIM on a brain MRI dataset compared to other state-of-the-art approaches like U-Net, Cascade-Net, and PD-Net. The paper lacks critical details on the network architecture---the loss function used, the architecture of the convolutional layers, a well-structured and well-formed figure representing the network, the cascaded structure of the proposed architecture, the datasets used---to name a few, as well as an ablation study, both on the width of the CNN as well as the cascaded architecture. While the results do indeed beat the state of the art, I believe it is not straightforward to reproduce the results from the manuscript in its current form. The pipeline also includes an inverse Fourier transform and it is not clear whether the entire network is trained with backpropagation, and if so, how. While the results do beat the state of the art, I believe the manuscript can be accepted after some critical revision in terms of the description of methodology and datasets. In my opinion, reproducibility of a paper strengthens the results of the paper. """,2,1 midl20_114_2,"""The study presents a new framework for zero-filled MRI reconstruction. The framework is built on a two-way backbone, simultaneously processing the input signal from its Laplacian pyramid decomposition and a downsampled version of it. It is built as an end-to-end deep learning model involving complex convolutions. Experiments are performed on an in-house dataset with 2D images from 22 patients, on which the proposed approach is reported to have the best performance compared to state-of-the-art community approaches. Substantial effort is made to formulate the studied problem. There are great visual supports, including the deep learning architecture pipeline and sample results.
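As background for the multi-scale idea referred to above, a minimal sketch of a standard real-valued Laplacian pyramid decomposition is given below; it is a generic illustration under simple assumptions (Gaussian smoothing, factor-2 downsampling), not the complex-valued CLP-Net implementation, and the helper name is hypothetical.

# Generic real-valued Laplacian pyramid, for illustration only.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_pyramid(image, levels=3, sigma=1.0):
    # Returns [band_0, ..., band_{levels-1}, low_pass_residual]; each band
    # holds the detail lost when moving to the next coarser scale.
    pyramid = []
    current = np.asarray(image, dtype=np.float64)
    for _ in range(levels):
        smoothed = gaussian_filter(current, sigma)
        down = zoom(smoothed, 0.5, order=1)            # half resolution
        factors = np.array(current.shape) / np.array(down.shape)
        up = zoom(down, factors, order=1)              # back to current size
        pyramid.append(current - up)                   # band-pass (detail) image
        current = down
    pyramid.append(current)                            # coarse low-pass residual
    return pyramid

Summing each band with the successively upsampled residual approximately reconstructs the input, which is what makes such a decomposition convenient as a multi-scale backbone.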
The authors also make an effort to explain the theoretical bases of the components used in their approach, such as complex convolutions or the complex Laplacian pyramid decomposition. Experiments report conventional measures for this task. From an evaluation perspective, all results are derived from an in-house dataset, which makes the paper unreproducible. Most importantly, there does not seem to be any validation set, while the paper proposes a new architecture (i.e. an optimization hyper-parameter). The paper suggests that experiments are on 2D images, while competing approaches such as KIKI-Net report experiments on 3D images. If experiments are on 2D images, the authors should specify whether the training/testing split was done patient-wise or if images from a patient can be in both sets. The major drawback of the paper is the experimental evidence from the designed setup: experiments are in 2D and on in-house data including few patients, with no validation set. Although state-of-the-art methods are reimplemented, there is no direct comparison on publicly available data.""",2,1 midl20_114_3,"""This paper proposes a Laplacian pyramid based complex neural network for fast MRI imaging. The proposed deep network contains two important components: the Laplacian pyramid and the complex convolution, which are existing work. The authors compare the proposed network with several existing methods and show its better performance. The paper tries to improve MRI reconstruction with the Laplacian pyramid and complex convolution. The paper compares the proposed method with several existing methods. The paper is well organized and presented. 1. The paper fails to cover a series of recent works on MRI reconstruction with Generative Adversarial Networks, including: *Shende, Priyanka, Mahesh Pawar, and Sandeep Kakde. ""A Brief Review on: MRI Images Reconstruction using GAN."" 2019 International Conference on Communication and Signal Processing (ICCSP). IEEE, 2019. *Quan, Tran Minh, Thanh Nguyen-Duc, and Won-Ki Jeong. ""Compressed sensing MRI reconstruction using a generative adversarial network with a cyclic loss."" IEEE Transactions on Medical Imaging 37.6 (2018): 1488-1497. I believe GAN-based MRI reconstruction could alleviate the blurring issue in reconstruction, but the authors have not included any reference, discussion or comparison with such methods. 2. The authors do not give any ablation study of the proposed model. Why could combining the Laplacian pyramid and the complex convolution improve the performance? Which of these two components plays the more important role, and do both components improve the performance? The paper fails to mention a series of related research with GANs on MRI imaging. No ablation study is given to analyze the proposed model. Therefore I could not give any rating beyond weak reject, unless the authors improve the paper.""",2,1 midl20_114_4,"""This paper proposes to learn a Laplacian pyramid based complex neural network (CLP-Net) for high-quality image reconstruction from undersampled k-space data. The goal is to accelerate MR imaging. The experimental results on in vivo datasets show that the proposed method obtains better reconstruction performance than three state-of-the-art methods.
1) a new framework for MR reconstruction from undersampled k-space data has been proposed by exploring the encouraging multi-scale properties of Laplacian pyramid decomposition; 2) a cascaded multiscale network architecture with complex convolution has been designed under the proposed framework; 3) the experimental validations on in vivo datasets have shown the higher potential of this method in preserving the edges and fine textures when compared to other state-of-the-art methods. No notable weakness identified. I support this paper due to the novel cascaded multiscale network architecture using complex convolutions, and its strong performance on in vivo datasets in preserving the edges and fine textures. See above. I support this paper due to the novel cascaded multiscale network architecture using complex convolutions, and its strong performance on in vivo datasets in preserving the edges and fine textures.""",3,1 midl20_115_1,"""A novel registration algorithm is proposed. The main innovative element is the use of a recently proposed method to estimate mutual information (Belghazi et al). It's a very good idea to test this approach in the context of medical image registration. Two other novel elements of the method that are highlighted in the paper are the transformation model (using the matrix exponential to parameterize a rigid transformation matrix), and a particular multiresolution approach. The method is compared to 6 different algorithms on two public datasets with ground truth. The performance is promising. - It is an interesting idea to apply the method of Belghazi to image registration. - The transformation model based on the matrix exponential seems elegant. - The method is compared with 6 other algorithms. - The method is evaluated on two public data sets (FIRE and ANHIR). - The results of the comparison with other methods are difficult to interpret because it's not only the similarity measure that's different across methods, but also many other algorithmic components, such as optimisation method, transformation model, multiresolution approach, number of image samples to compute the similarity measure, etc. - The manuscript lacks focus because it introduces three contributions at once. The ablation test in the appendix sheds some light on their individual added value, but such information is essential and should not have been hidden in the appendix. - Eq. 5, which assigns a separate transformation to the first level of the multiresolution pyramid, seems quite ad hoc: why only for the first level? And if there is a separate transformation for the first level, then how does it still influence the other levels? It seems that the estimation of v^1 becomes independent from the estimation of v. - The paper lacks some references and related discussion. The matrix exponential for modelling transformations was already proposed by Wachinger & Navab, IEEE PAMI 2013. Simultaneous multiresolution for image registration was investigated by Sun et al, IEEE Transactions on Image Processing, 2013. - The paper is essentially too long, and the appendix contains information that should have been part of the main manuscript. Timing results and the ablation study are examples. - The comparison of timing results is confusing because of the many differences in implementation details between methods.
The idea to test this novel algorithm for estimating mutual information in an image registration framework is good, but the current manuscript raises too many concerns regarding experimental design and clarity of presentation.""",2,1 midl20_115_2,"""The paper addresses the image registration problem by using the existing MINE neural estimator for MI and the matrix exponential for the transformation matrix. The novelty is to utilize the existing MINE neural estimator for mutual information computation and the matrix exponential for rigid body transformation optimization for the image registration application. The experiments are shown on the ANHIR and FIRE datasets with the standard NAED metric. - The paper is very easy to follow. - The idea of using the MINE neural estimator for MI in the case of multi-modal image registration is straightforward. The idea utilizes an end-to-end neural network for the image registration problem. - Shows a way of overcoming the histogram-based MI estimation method by using the MINE approximation method with a neural network for image registration applications. While the paper focuses on the MI metric for image registration, the paper lacks comparisons and citations to a wide range of recent deep learning based image registration algorithms in the literature, e.g.: - ""Semi-Supervised Deep Metrics for Image Registration"" Alireza et al 2018. - ""Deep similarity learning for multimodal medical images"" Cheng et al 2016. - ""Networks for Joint Affine and Non-parametric Image Registration"", Shen et al 2019, - ""Recursive Cascaded Networks for Unsupervised Medical Image Registration"" Zhao et al 2019 The above-mentioned papers are, for me, quite relevant to the motivation of the paper and show comparisons of different metrics, other than MI, utilized in image registration. The experiments were done only with basic metrics. Also, the number of bins in AMI (default 64) and sigma are not mentioned in the comparison. These two parameters are quite sensitive, need to be fine-tuned, and may change the reported results. Also, a sentence like ""MINE is differentiable because it is computed by neural networks"" is very unwieldy for a scientific paper. 1. The paper is not very novel but utilizes the existing MINE algorithm for MI-metric-based image registration. 2. The paper lacks proper validation and comparison with respect to recent neural network based image registration methods. 3. The paper lacks citations to the existing vast literature on similarity based image registration methods.""",2,1 midl20_115_3,"""In this paper, the authors present a new multimodal image registration algorithm which measures the similarity of the registered images using an approximation of the mutual information. The contribution of the paper is to use the neural networks-based mutual information approximation method of (Belghazi et al., 2018) for this approximation. They also make it multi-scale by comparing the images at several scales simultaneously. The presented similarity metric is shown to compare well with other classic metrics such as the Mattes Mutual Information, the Normalized Mutual Information or the Normalized Cross Correlation for instance, on the FIRE dataset (Hernandez-Matas et al., 2017). I think that the idea to learn on the fly an optimal similarity metric to compare multi-modal images at different scales simultaneously is interesting. The results also seem promising. The bibliography is sufficient for a conference paper. -> Compared with Eq. (4), the effect of the additional term in Eq.
(5) is far from clear. Its effect should be assessed at least in the appendix. Why also use the additional term pseudo-formula at the first scale only? How are they constrained to remain small? -> Although the parameters of Eqs. (4) and (5) are clear, the parameters pseudo-formula of Eq. (8) optimized by the N.N. are not clear. Are they the same for all scales? If yes, how can images at different resolutions be handled simultaneously with the same parameters? What is also the meaning of this similarity metric which compares different representations of the same image? -> What is the meaning of pseudo-formula in this paper? To be honest, I can't understand from the paper what is optimized by the neural network in practice. Clear discussions should be given to explain the link between the formalism of MINE and the optimized energy Eq. (8). -> In the results section, the results obtained in Table 1 using the approximation of the mutual information are about four times more accurate than those obtained using the true mutual information. This is really suspicious from a scientific point of view. It deserves extensive discussion. The paper addresses an interesting question and the results look promising (though suspicious). I think the authors should clarify their methodology before presenting it at MIDL. The meaning of the learned multiscale similarity metric is indeed not clear at all.""",3,1 midl20_115_4,"""This paper presents a new registration method based on differentiable mutual information (MI). The first novelty of this paper is to use a recently proposed MI estimation method called MINE (mutual information neural estimation) that can estimate a lower bound of MI. Importantly, MINE is differentiable and so this can circumvent the drawbacks of traditional histogram-based MI computation. The second novelty of this work is to obtain the transformation matrix via the matrix exponential of a linear combination of basis matrices. The experimental results demonstrate superior registration performance over other traditional methods. -The use of the differentiable mutual information tackles some of the drawbacks of traditional histogram-based MI computation. -The proposed method computes the transformation matrix via the matrix exponential of a linear combination of basis matrices, which accounts for affine transformation. -The matrix exponential makes the optimization process smoother. -The proposed method was evaluated on currently accepted standard similarity measures, demonstrating superior performance. In the method, the transformation is designed to capture affine transformation only, and extending it to a deformable one may not be straightforward. In the validation part, it is necessary to show the advantages of both the MINE part and the transformation. - This work has sufficient contributions (e.g., the computation of mutual information alongside the design of the transformation matrix to account for the affine transformation) to the registration community to move forward.""",3,1 midl20_116_1,"""The paper presents a method to generate mammography images with styles from different acquisition devices (Hologic, Gioto, Anke). The main motivation is to adapt different images to the preference of the expert reader. The idea is interesting but the paper lacks any information regarding the training process or the training dataset used to generate the transformation models. Also, the experiment with the two experts contains a small number of images and the distribution of the classes is not described.
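As background for the MINE-based registration discussed in the reviews of submission 115 above, the two ingredients the reviewers describe can be written in their generic textbook form; this is only a sketch of the standard formulation (the Donsker-Varadhan bound of Belghazi et al., 2018, plus a matrix-exponential transform parameterisation), and the submission's own Eqs. (4)-(8) may differ:

```latex
% Donsker-Varadhan lower bound optimised by MINE with a "statistics network" T_theta,
% and a matrix-exponential parameterisation of the spatial transform.
\begin{align}
  I(X;Y) &\;\ge\; \sup_{\theta}\;
      \mathbb{E}_{p(x,y)}\!\left[T_{\theta}(x,y)\right]
      \;-\; \log \mathbb{E}_{p(x)\,p(y)}\!\left[e^{T_{\theta}(x,y)}\right], \\
  A(v)   &\;=\; \exp\!\Big(\sum_{i} v_{i}\,B_{i}\Big),
\end{align}
```

where the B_i are fixed basis matrices spanning the Lie algebra of the chosen rigid or affine group, so that optimising the unconstrained coefficients v_i always yields a valid transformation matrix; both terms are differentiable, which is what makes end-to-end optimisation of the registration possible.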
The results do not support the claim that using the style generation model to generate images in the expert's preferred style improves the expert's performance.""",2,0 midl20_116_2,"""Different mammogram vendors have different proprietary algorithms to post-process raw photon counts in digital mammography. This short paper proposes to learn to switch from one post-processing to another by learning such algorithms with a CNN. I find the idea interesting in some sense, but I have some doubts regarding both the technical side and the real impact of such a tool, if it were to be further developed. The architecture is more or less well detailed, but no mention of any training detail is made, nor is the data used to learn these transformations described. Did the authors have a database of raw mammograms and corresponding post-processed ones for each vendor? What was the resolution that their network admitted as input? Was the output a low-resolution mammogram that was then upscaled to the original resolution (which was probably quite large)? If that is the case, that would be quite concerning, as objects of interest in these scans can be of a very small size (micro-calcifications). Regarding validation of the technique, by just looking at a small image in Fig. 2 one cannot say anything about this technique. There is a brief comment about showing 10 mammograms to two experts, but just looking at the accuracy of these two experts on only ten mammograms is very weak evidence of the usefulness of this technique, more so considering that introducing this in a clinical workflow would lead to experts taking quite a bit more time to read a scan. On the other hand, at the very least I was expecting a numerical comparison between the output of the network and the reference mammogram (in terms of SSIM or other metrics commonly used in reconstruction or super-resolution papers). In my opinion, this abstract lacks essential details to really understand the correctness and interest of the proposed technique, and the validation is too weak at this moment.""",2,0 midl20_116_3,"""The problem addressed is how to map a raw mammogram (the data as measured) to an image suitable for viewing by the radiologist. This is by itself an important task. However, to my knowledge, most of the difficulties radiologists have arise when they have to judge priors made with a different processing, not so much the images themselves with a specific processing (although they have a preference). Furthermore, what is lacking: - How is the model trained? - What vendor and machine was used specifically? - How is the ground truth obtained? - What image resolution was used, and does it generalize over different detectors? - The reader study is not convincing. What pathologies were present? Who were these expert readers? How does the model perform with calcified lesions or DCIS lesions?""",2,0 midl20_116_4,"""The paper proposes a CNN based method where a switch-map is used to generate multiple enhancement styles from raw mammography images. The idea of the paper is simple and interesting. The paper lacks some details, like how the switch map is generated, how the training of the network was done, whether paired images from different mammography devices were used to train the network, etc. The quantitative results are not convincing and the details about the dataset are missing.""",2,0 midl20_117_1,"""The manuscript proposes a 2-stage transfer learning strategy for the classification of epithelial ovarian cancer subtypes.
The approach takes 1024×1024 patches from the whole slide image and downsamples the patches to 256×256 to train a Stage-1 network with a VGG-19 structure. The network is then embedded into a Stage-2 network with downsampled 512×512 patches as initial input. Results show that the proposed 2-stage strategy outperforms a baseline VGG-19 network and a Stage-1 network alone at the whole slide level. The manuscript is written clearly and the proposed approach shows its effectiveness. Concerns that need to be addressed are: 1. The Stage-2 network generates mixed results at the patch level according to Table 1, while when integrated into whole slide level prediction the performance is improved significantly. Explanations are expected. 2. It is worth evaluating the performance of training the Stage-2 network from scratch. 3. It is unclear whether the patches fed into the baseline VGG network are downsampled or at original resolution. 4. How will the model perform if 512×512 patches are used at original resolution? """,3,0 midl20_117_2,"""- Reasonably well motivated problem, even if the clinical relevance isn't mentioned. - Well described approach for 2-stage deep learning that trains on low and high detail images of the WSI. - Comparison shows improved performance over the standard patch based approach, though that too appears to have been run in a 2-stage setting. """,4,0 midl20_117_3,"""In the paper, the authors presented clear goals and well-designed experiments. Both experiments and methods are well described and easy to follow. The paper proposes a multi-resolution approach that uses two DL models and shared weights. The proposed approach is interesting. The authors achieved decent results with the Kappa score equal to 0.8. Adding the following information could be useful for readers: - How many areas were annotated per slide? - How was the random forest classifier trained (on which dataset)? - A reference for the Lanczos filter is missing. """,4,0 midl20_117_4,"""This paper is well written, the methods are appropriate, sufficient detail is provided to enable replication, and the results are presented clearly. This paper could be improved by expanding on the motivation for this work, including information about how ground truth labels were obtained, and contextualizing this study with previous literature. Detailed comments: No motivation is given for how better classification of these subtypes would improve patient outcome. The rationale for the two stage approach seems to be to incorporate features learned on the low-resolution space in learning the high-resolution space, and for using information from multiple scales to classify the patient. This approach is appropriate for the domain (pathological analysis) and is explained clearly and completely here. However, I wonder why other approaches that use information from multiple domains, such as U-Net, were not considered. Did all the images come from the same hospital? At what micron-per-pixel resolution were images digitized? An important missing piece of this manuscript is a description of how the image class labels were determined. If this was done by an objective method, such as molecular analysis, there is no problem and a statement explaining the label origin can be added. However, if the class labels were based on morphological analysis, there is the problem that the labels are not true ground truth. If the ground truth labels are obtainable, such as through a molecular test, motivation needs to be provided for why this study is necessary.
With unbalanced classes, measures such as sensitivity and specificity or true positive rate and true negative rate should be reported instead of accuracy. There are multiple methods for calculating AUC in a multi-class problem. The authors should state the method they used. The authors state that their method outperformed the baseline method. This is only true in the slide-level case. I agree with the authors' implicit assumption that patch-level metrics are less clinically important than slide-level metrics, but this should be made explicit and justified. The conclusions of this work, that the two-stage method is better than conventional approaches, would be strengthened by referencing past publications that reported worse performance in this task.""",4,0 midl20_118_1,"""The authors proposed a GAN-based method to synthesize and remove lesions on mammograms. The GAN model uses a U-Net design with a self-attention module and a semi-supervised training loss. The authors augmented their original training set from the OMI-DB dataset with the GAN-generated samples, and demonstrated improvement compared to their baseline model on patch-level malignancy classification on a test set of real mammogram data. The paper is well written. Most methodological details are clearly documented, and well justified. Besides classification tasks, the authors also performed a t-SNE embedding analysis to understand how the real and synthetic data are clustered and the effect of the augmented model. I am mostly concerned with the lack of comparisons in the evaluation. There are a few papers on GAN-based synthesis of mammograms (for example, Ref [13]). The authors did not compare their proposed method with any previous work. And there is no comparison to state-of-the-art breast cancer classification performance on the OMI-DB dataset (besides a simple baseline model in the paper). The paper is well written. Most methodological details are well justified. But it lacks comparisons to previous work (i.e. previous GAN-based methods for mammogram synthesis, and state-of-the-art breast cancer classification methods) in the evaluation. """,3,1 midl20_118_2,"""This paper presents a generative adversarial network for augmenting lesion or lesion-free mammogram images to address class imbalance issues in classification. Specifically, it proposes to use self-attentive and semi-supervised learning, considering contextual information in the breast tissue, built on a U-Net-like architecture. The problem is clearly stated. It sounds like a good idea to synthesize more samples based on available mammogram data in clinical practice to solve the traditional class imbalance problem. Qualitative visual results are well presented. There are some unclear parts in the method section. It would be great if this became clearer to readers: how the synthesized image could be made realistic. Following the self-attention module, it seems that the location of the generated lesion is based on the attention detected in the input image. Does this make sense all the time? Also, how to remove the existing lesion is not explained in the method. In the experiment, I think it is meaningful to compare the classification accuracy between the proposed synthesized dataset and balanced but smaller real datasets to see the effectiveness of the proposed method. Although this paper has a contribution to the research field, it still has room to improve and clarify (as detailed in Weaknesses). Especially, the ad hoc parts on page 4 could be revised for clarity.
""",3,1 midl20_118_3,"""This work used a U-Net-like structure in a GAN to add/remove lesions from healthy/cancerous mammograms. In this work, the authors add self-attention and apply progressive GAN training to achieve a proper result by their standard. It is an innovative way to tackle the scarcity of lesions in mammography by utilizing a large amount of healthy data. The result shows that the performance is at least on par with the baseline they chose. 1. Synthesizing lesions instead of synthesizing whole mammograms with lesions is a great way to simplify the targeted issue. 2. Extending the binary cross-entropy loss of the discriminator to a four-way output is an interesting way to perform class-wise discrimination. 1. The threshold 0.1 in the post-processing seems important. However, there is no clear explanation of why they chose this value. It would be more persuasive to do a sensitivity study on the selection of the threshold. 2. The t-SNE embedding does not reveal anything in terms of how data points are distributed in the high-dimensional space. Showing this embedding would not help to disclose the effect of the augmented model. It is only an intuitive tool to show how data would cluster in a high dimension. 3. The authors do not discuss why the performance would peak at 50% w/ decay, or the effect of the rate of synthetic images included in the training data. I find this work promising but limited in its validation and in some of the analysis of the results. I hope the authors can improve what is left behind. If this work is fairly evaluated, it would contribute to the topic of image synthesis in the medical field.""",2,1 midl20_118_4,"""The novel contribution of the paper is a GAN based approach for data augmentation in mammography images to aid in the classification of benign vs. malignant breast lesions. Given the pervasive problem of a lack of datasets, especially with a balanced proportion of lesions vs. non-lesions in medical image sets, an approach to produce realistic augmented sets with variations from the original data is highly relevant and important for improving the generalization capability of the models trained on medical image sets. Pros: 1. While GAN based augmentation itself is not new (e.g. Choi et al., ICCV 2019), GAN based data augmentation in this application is new. 2. The presented visual result seems qualitatively convincing. 3. The experiment to assess the value of augmented sets as a proportion of augmented vs. real data is useful. Cons: 1. There isn't anything methodologically new in this approach. Semi-supervised classification (Odena), the gradient penalty loss, and self-attention modules have all been used before, although in other contexts. 2. A bigger issue is that the approach seems rather over-complicated. Particularly, the choice of three different networks; the authors say that use of the z variable to select between the different categories didn't work, but why this didn't work is not explained. I wonder if this has to do with how the training was done, or whether, with the use of an over-complicated network, there wasn't enough data for obtaining generalizable training. Experiments to explain this choice are necessary. 3. What the use of the attention module is remains unclear. It seems like it is just an add-on that would not really have a useful effect. From the shown examples, small patches are selected and a random location is chosen for adding lesions or calcifications. What is the attention module doing here? You don't really need it for anything. This should be explained and backed up with ablation experiments. 4.
The lesion removal explanation is very confusing. It almost reads like you do a lesion detection and then remove it. Is this correct? If that's the case, why then do you need a GAN? You could just do it with a lighter-weight and easier-to-optimize feedforward segmentor. Please explain and clarify this method. A GAN based data augmentation approach applied to mammography image classification is potentially useful. Given the pervasive problem of a lack of datasets, especially with a balanced proportion of lesions vs. non-lesions in medical image sets, an approach to produce realistic augmented sets with variations from the original data is highly relevant and important for improving the generalization capability of the models trained on medical image sets. The paper is lacking a lot of details related to the method, e.g. the self-attention implementation, spectral normalization - this is just mentioned but never expanded. The implementation and training details could be expanded a bit more. There isn't anything methodologically new in this approach. Semi-supervised classification (Odena), the gradient penalty loss, and self-attention modules have all been used before, although in other contexts. The rationale for semi-supervised classification is briefly presented but not validated in the results. The same goes for the gradient penalty loss and self-attention modules. Why do you even need self-attention in these kinds of images? There doesn't seem to be any specific region other than the one already indicated randomly to produce a lesion image. The gradient penalty is mostly to prevent mode collapse issues. How stable is the convergence with and without the gradient penalty? These results should be presented. The lesion removal method is confusing or wrong. Also, comparisons to other approaches such as GAN-based methods are missing. Why is this method better, and what aspects or modules used in this architecture offer improvements? There isn't anything methodologically new. The paper combines several well-known techniques. Although this by itself is not a strong negative, the rationale for adding the individual components to produce such a seemingly over-complicated data augmentation approach should be validated with experiments. Also, comparative experiments against other GAN-based data augmentation methods would be helpful. The lack of such experiments makes this paper less exciting. """,2,1 midl20_119_1,"""Quality: mediocre - while there is an interesting idea regarding comparing simulated MRI data to real scans to detect errors in segmentation, the practical implementation in neuroimaging studies is not realistic with the FPR shown in the results. Clarity: The paper is clearly written, but there is a lack of details in the data description. While three studies are mentioned, it is not clear how exactly training and test data were divided, and parameter selection is not described at all. Originality: I am not completely familiar with all the literature they cite, but it seems that their idea of comparing simulated MRI data to real scans to detect errors in segmentation is novel - but this might also have been presented at another conference before. At least this is not widely known. Significance: While a solution for QC of brain segmentations is a very important problem, the paper does not present any significant practical solution, hence its significance is low.
Pros: - the need for good QC tools in MRI neuroimaging is of high significance - the idea to detect segmentation errors from differences between synthesized and real MRI seems to be novel Cons: - practical implementation in neuroimaging studies is not realistic - as an extensive FreeSurfer user, I am aware that sensitivity/recall is not the only thing that's important, but rather the false positive rate or precision; for every change in a segmentation FreeSurfer needs to be run again, so a high rate of false positives will make this procedure not practical compared to visual QC - the errors shown in Figure 2 d) that cannot be fixed with the method are some of the most crucial errors when doing cortical segmentation with FreeSurfer - the proposed method uses FreeSurfer, but doesn't utilize the real power of FreeSurfer by using the surface parcellations; in principle it does not utilize FreeSurfer to its full extent if the cortical ribbon quality is judged by segmentations instead of surface parcellations - the way the training and test data are described is not clear - the description of the parameter selection is lacking """,1,0 midl20_119_2,"""The authors propose a method for brain MRI segmentation quality control (QC). Their method makes use of pix2pix to generate a synthetic MRI from the segmentation result and then compares the synthetic and original MRIs to create an error map. This error map and the original MRI are then input to a CNN that classifies the result as either good or bad. Strengths: - Using pix2pix to generate synthetic MRI in order to create error maps is interesting and can be used to localize areas of segmentation error, and initial results appear promising. - Training (n=1,600 subjects) and testing (n=~800) set sizes are large, demonstrating robust evaluation. Weaknesses: - Evaluation of the segmentation error maps is limited to qualitative visual inspection. Some form of quantitative evaluation would be useful. - It is unclear how the training/testing images were scored and under what criteria, which would be useful for understanding the good/bad rating system. - While segmentation QC is a valuable tool, summarizing the entire segmentation quality into a single binary good/bad may not be useful for practical use. It is very subjective to say that something is good/bad; for example, Fig 2a and Fig 2c show dramatically different severities of segmentation error. Reporting this amount of error may be more valuable than a binary good/bad, and would let the users decide their tolerance for error. This is an interesting topic, and I generally like the pix2pix approach to compare against the original imaging as a way to get to a segmentation error map; however, I am less enthusiastic about the second half of the proposal, the classification methodology, as I think binary evaluation does not necessarily have straightforward clinical utility. Instead, it seems that quantification of the error map as a measure of quality would be more useful. """,2,0 midl20_119_3,"""This short paper proposes a method for detection of segmentation errors. First, a network (cGAN) is trained to predict the original image based on its segmentation. Differences between the predicted image and the original image are an indication of segmentation errors. A second network takes this difference image as input, together with the original image, and predicts whether the segmentation was acceptable or not. Evaluation results are promising. Strengths: - Relevant topic - Method seems original - Promising results.
Weaknesses: - Some motivation for the method is lacking. Why not directly train a classifier that takes the segmentation and the original image as input, and predicts whether the segmentation was acceptable or not? Why do we need to first predict the original image? Experimental comparison to such a simpler approach would have made the paper stronger. - The experimental setup is a bit unclear. Specifically, it is not clear whether the final class-balanced dataset of 300+300 subjects had overlap with the previously mentioned datasets of 1600/600/190 subjects. This would be suspicious, since the cGAN was trained on part of that data. """,3,0 midl20_119_4,"""This work presents a deep learning approach for error detection of automated segmentation pipelines. The model uses the previously published pix2pix conditional GAN model to learn the original image from the segmentation. In the test phase the predicted image is compared to the original image using a CNN, and an error map is generated. Novel approach, nice initial validation. The method shows good performance, but there are some false positives that should still be addressed. """,4,0 midl20_120_1,"""key ideas: In this article, the authors tackle the clinically important problem of reconstruction of accelerated MRI. They use the concept of knowledge distillation to reduce the number of parameters and consequently the running time for reconstruction of undersampled MRI data, which is an ill-posed problem. They also compare different ways of knowledge distillation. experiments: The authors examine the performance of their proposed algorithms and other earlier algorithms on three different datasets - brain, cardiac and knee MRI - with different subsampling factors (4x, 5x, 8x). While the experiments themselves are set up nicely, the results are a bit disappointing. In the qualitative results the differences are very small and almost invisible. Regarding the quantitative analysis, the evaluation metrics used (Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) metrics) are completely appropriate for the task, but there is largely no difference between the proposed method that employs KD and a regular student network. Also, the authors state they will use the Wilcoxon signed rank test with an alpha of 0.05 to assess statistical significance, but this is never done. significance: The clinical problem that the article addresses is very relevant, but the experiments are not convincing enough for me that the knowledge distillation really gives enough of a performance boost. The paper tackles an important problem in MRI. Reconstruction of undersampled k-space data is a challenging problem and there have been a lot of attempts at solving it. The paper adds to the previous approaches by trying to reduce the number of parameters fitted and the computational time with a knowledge distillation approach. While the approach is interesting and the experiments seem to be performed fine, the results are a bit disappointing. In the qualitative results the differences are very small and almost invisible. Regarding the quantitative analysis, the evaluation metrics used (Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) metrics) are completely appropriate for the task, but there is largely no difference between the proposed method that employs KD and a regular student network. Even the performance of a simple zero-filling is not that far off. This should be clearly addressed in the discussion.
Also, the authors state they will use the Wilcoxon signed rank test with an alpha of 0.05 to assess statistical significance, but this is never done. The submitted pdf was way over the recommended 8-page limit - the actual paper ran onto the 10th page, and with appendices the pdf had 18 pages. Important things, such as the quantitative results, were hidden in the appendix, which is not acceptable. The paper is generally nicely written and addresses an important problem. The authors also have a nice idea for improving the current solutions on the issue of undersampled MRI reconstruction. The setup of the experiments is nice, but the results are disappointing. Also, the actual presentation of the results and discussion is subpar. """,3,1 midl20_120_2,"""In this work, a knowledge distillation framework for MRI workflows is proposed in order to develop compact, low-parameter models without a significant drop in performance. A combination of the attention-based feature distillation and imitation loss is used to demonstrate the effectiveness of this application on the MRI reconstruction architecture, DC-CNN. Extensive experiments using Cardiac, Brain, and Knee MRI datasets for 4x, 5x and 8x accelerations are conducted. For the Knee dataset, the student network achieves a 65% parameter reduction, 2x faster CPU running time, and 1.5x faster GPU running time with a limited drop in accuracy. The proposed solution is compared with other feature distillation methods, including an ablation study to understand the significance of the different components. Adequate comparison with the state-of-the-art methods is proposed in the experimental section. The proposed method obtains lower error in comparison to other methods when the parameters of the model are reduced. An ablation study is also made to show the improvement of different components of the pipeline. -The proposed pipeline is a combination of existing approaches [(Saputra et al., 2019)+(Komodakis and Zagoruyko, 2017)] -I am not very convinced that in a medical application it is acceptable to drop accuracy to save some computational cost and model memory. However, parameter reduction can also be useful for other aspects, such as obtaining better generalization of the model. I would suggest the authors make their motivation for the proposed application stronger. The proposed approach shows promising results. An adequate experimental section highlights the obtained improvements with respect to the other methods, and an ablation study to understand the contribution of each component. The motivation for using the proposed approach in a medical context, allowing a drop in accuracy to save computational cost and memory, is not convincing.""",3,1 midl20_120_3,"""This paper proposed a knowledge distillation (KD) framework for memory-efficient models without a significant drop in performance. The authors also introduced attention-based feature distillation and an imitation loss. Experimental results for MRI reconstruction and super-resolution demonstrate that the proposed KD framework provides significant improvement over the method without KD. The authors applied a novel knowledge distillation framework to MRI reconstruction. To better learn the intermediate representation from the teacher, attention-based feature distillation was proposed. An imitation loss function is further proposed as a regularizer. - Although the concept of knowledge distillation is very interesting, one limitation is that the student network is highly dependent on the teacher network.
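To make the distillation terms debated in the reviews of submission 120 concrete, here is a minimal PyTorch-style sketch combining a reconstruction loss, an imitation loss (matching the teacher's output) and attention-based feature distillation; the weights, the L1/MSE choices and the function names are illustrative assumptions, not taken from the paper:

```python
# Illustrative knowledge-distillation loss for an image reconstruction student network.
import torch
import torch.nn.functional as F

def attention_map(feat):
    """Collapse a feature map (N, C, H, W) to a normalised spatial attention map."""
    att = feat.pow(2).mean(dim=1, keepdim=True)          # channel-wise energy
    return F.normalize(att.flatten(1), dim=1)             # (N, H*W), unit norm

def distillation_loss(student_out, teacher_out, target,
                      student_feats, teacher_feats, alpha=0.5, beta=0.5):
    recon = F.l1_loss(student_out, target)                 # usual reconstruction loss
    imitation = F.l1_loss(student_out, teacher_out)        # imitate the teacher's output
    attention = sum(F.mse_loss(attention_map(s), attention_map(t))
                    for s, t in zip(student_feats, teacher_feats))
    return recon + alpha * imitation + beta * attention
```

The attention term assumes that the matched teacher and student feature maps share spatial dimensions; in practice an extra resizing or projection step may be needed.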
If the performance of the teacher network is poor, how does the student network overcome the teacher? - Lack of details on the dataset: How are the images cropped to 150x150 for the ACDC dataset? - Image resolution is poor, for example, figure 3. Please use a higher resolution without the bounding box. The paper is well written and easy to follow. The authors explained the proposed methods and demonstrated the effectiveness with extensive experiments. Overall, this is an interesting and solid approach.""",3,1 midl20_120_4,"""The paper shows the method of achieving knowledge distillation on two previous networks, i.e., DC-CNN and VDSR. Its experiments show that the proposed method delivers state-of-the-art performance on MRI reconstruction and super-resolution. The knowledge distillation method is significant in decreasing the cost of computation and storage thanks to its ability to compress models. The paper provides good validation work regarding the effectiveness of the proposed framework on different networks and datasets. They showed the performance in MRI reconstruction using cardiac, brain, and knee MRI datasets. They also evaluated the method in MRI super-resolution. 1. The authors did not provide the performance under different compression rates. In the ablation study, the paper studies the performance under different settings of loss functions, but does not provide the performance under different compression rates. For example, the number of convolutional layers in the student DC-CNN is set to be 3, but it could be set to other values from 1 to 4. The performance under high compression rates would be more powerful for verifying the ability to compress models. 2. The presentation of model complexity is weak. The significance of knowledge distillation methods is model compression, so the number of parameters and the run time of the compared models should be clearly presented. The knowledge distillation method is useful in compressing models. The methodological novelty is limited. The method has been applied to several medical image reconstruction tasks, but the restoration validation work is limited.""",3,1 midl20_121_1,"""The authors present results using a ResNet 101 classification model to classify high risk prostate cancer from digital pathology using a multi-scale data representation (stacking image patches at 10x, 5x, and 2.5x magnification into a single input). Strengths: - The paper is well written and the clinical problem is clearly defined. - The use of weakly annotated labels is interesting (but lacks details; see weaknesses). - Classification results are shown for 10 experimental conditions (10x, 5x, and 2.5x resolution each using 3 image parameterizations, and the multi-scale approach) along with 3 ensembling methods to demonstrate the contributions of each individual imaging component are solid. Weaknesses: - While the use of weakly annotated labels is very interesting, there are no details as to how this information was utilized in the training. This has potential to be innovative, but unfortunately it is not presented here. - Multi-scale imaging (referred to as multi-view here) is not necessarily novel. - Limited testing on a single patient's data makes it challenging to draw conclusions about how the results generalize over the entire set. Overall, this is an interesting evaluation and application of multi-scale representation, but enthusiasm is limited due to limited testing on a single subject.
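As an illustration of the multi-scale (multi-view) input described in the review of submission 121 above, the sketch below reads patches centred on the same location at several magnifications and stacks them channel-wise; the OpenSlide calls are standard, but the level indices, patch size and channel stacking are assumptions for illustration only:

```python
# Illustrative multi-magnification patch extraction for whole-slide images.
import numpy as np
import openslide

def multi_view_patch(slide_path, center_xy, size=256, levels=(0, 1, 2)):
    """Stack patches of the same location at several pyramid levels into one array."""
    slide = openslide.OpenSlide(slide_path)
    views = []
    for level in levels:                                   # e.g. roughly 10x, 5x, 2.5x
        down = slide.level_downsamples[level]
        top_left = (int(center_xy[0] - size * down / 2),   # level-0 coordinates
                    int(center_xy[1] - size * down / 2))
        patch = slide.read_region(top_left, level, (size, size)).convert("RGB")
        views.append(np.asarray(patch))
    return np.concatenate(views, axis=-1)                  # (size, size, 3 * len(levels))
```

The stacked array can then be fed to an ordinary classification CNN, which is one simple way to realise the stacked 10x/5x/2.5x input the reviewer describes.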
""",2,0 midl20_121_2,"""In this manuscript the authors present a multi-view framework to classify architectural structures in digitized pancreatic tissue samples. Cons: -The paper is not very well written; it's hard to follow the story. -It's not clear what the motivation and ultimate goal of the study are. -The experimental design is not adequate. The authors should have compared to at least one other method. Pros: -The idea of a multi-view framework is nice and the results are promising. I suggest the authors add more comparisons (maybe additional data too) and better motivation/goal statements.""",2,0 midl20_121_3,"""In the paper, the authors presented a multi-resolution approach to analyse histopathological slides that mimics pathologists' work. Analysis of large scale data is challenging and new methods are necessary. The presented research goal is interesting. However, the paper in its current form needs improvement to be useful for the science community. The title of the paper is confusing. The presented approach was evaluated only on one data type (prostate cancer, and evaluation was done for only one slide); because of this it is difficult to estimate how well the proposed approach transfers to other tasks. The proposed method was evaluated on a single slide; as a result, the achieved results can be overoptimistic and biased. The method description and the motivation to apply selected techniques are not clear. - What type of normalization technique was applied? - Why was the color deconvolution method applied? DL models should be able to learn correlations between colors. - How were images converted to a BW color scale? - Fig. 1C is not clear. Three experiments can be distinguished in the figure; however, 10 experiments are mentioned in the text. - In the work, the authors applied data augmentation with label smoothing. It is not clear what this means. It should be explained or citations should be added. The results section is missing a comparison with a basic approach. Results for the 10x models are better for the test set only in terms of accuracy; in terms of AUC the 5x models are better. The paper contains formatting errors in citations. """,2,0 midl20_121_4,"""The quality and clarity of the work are good. Pros: Multiple channels of the input images are aggregated in classification. The paper is well organized and well written. The multi-view solution achieves better performance on testing data. Cons: The native channel aggregation is used in this work. The process of ""decon"" is not clearly described.""",3,0 midl20_122_1,"""The paper presents a simple hybrid strategy to deal with the lack of ground truth data. This is applied to a slice-wise tissue segmentation of fetal brain MRI where expert segmentations are missing. The strategy starts by training with the results (in part incorrect, but overall passing visual inspection) from an automatic segmentation method, followed by refining with manual corrections of the segmentations. CNN segmentation was performed with an existing solution called DeepMedic (2D slice-wise segmentation). - The paper shows that pretraining with automatic segmentations leads to great results that can be improved with limited ground truth data - The final results (from a slice-wise evaluation viewpoint) seem okay (but are only evaluated on slices from the middle of the volume) - 2D: The strategy is designed for a 2D segmentation and it would not be straightforward to extend it to a 3D strategy, as the whole volume would need to be corrected, which is extremely time-consuming.
- 2D segmentations are outdated and should be avoided as they perform badly for the full 3D volume (and only perform well in the middle of the volume). - Evaluation is done only in the middle of the volume and not throughout. - This paper basically proposes pretraining with automatic segmentation results, which is a rather straightforward and non-novel type of pretraining, followed by manual segmentations. - There is little methodological novelty here, and there is relatively little new here - Simple strategy for combining pretraining and updating with manual segmentations, no real novelty - the employed segmentation uses outdated slice-wise methodology and the corresponding strategy does not easily transfer to volume-wise analysis, as the whole volume would need to be manually corrected""",2,1 midl20_122_2,"""Providing a reliable segmentation of the major intracranial compartments in anatomical MRI of the fetus is difficult because the white matter is, depending on gestation week and location in the brain, not yet myelinated and thus inhomogeneous. This submission describes an approach that uses a convolutional neural network (CNN) to segment the intracranial compartment into 9 tissue types. The text focuses on the description of the optimization and evaluation of the method's performance. The topic of this manuscript is within the scope of the conference and of potential interest to its audience. The text is straightforward to understand and without major errors, except where noted below. A considerable improvement for a difficult problem is described here. There are a few, relatively minor issues with the current version that should carefully be reviewed. This submission is too long, even for a journal submission, but is missing relevant detail in some sections. Details are given below. Although no methodological advance is presented here, this manuscript describes a considerable improvement for a difficult problem. In terms of depth (and, well, length) this submission is rather a journal publication.""",4,1 midl20_122_3,"""The authors investigate if it is possible to train a CNN for fetal brain segmentation using minimal manual labeling. The authors first train a CNN using labels generated with another program called Draw-EM. The output of this network is manually refined for 283 2D slices. It is unclear to me if this refined data is used to retrain the CNN and how. In addition, the data separation is so complex that the paper is hard to follow. Although the qualitative results shown in Figures 3, 4 and 5 look promising, no quantitative results are given and no comparison with other work is made. This makes it impossible to determine if this approach is valid. The paper has a very nice overview of related work and the authors show promising results. It is clear that a lot of work has gone into this paper. Unfortunately, the data separation makes it almost impossible to understand what the authors did and the experiments do not address the problem. There are three main weaknesses: (1) Very complex data separation. I had to write out what the authors did because it was very hard to understand: 249 T2-weighted scans split into five(!) sets. 249 was split into 151 + 98. 151 were computed using Draw-EM, 59 failed and 92 passed. These 92 passed were split into 39 train(1), 10 validation(2) and 43 named held-out-c (3). The 98 scans were split into 12 scans named held-out-b (4) and 86 scans named held-out-A (5). This is impossible to follow.
The authors should just define a training, validation and test set. Use the training set to train, the validation set to validate, let the expert adjust the result on 283 2D slices in the training set, retrain the network and evaluate on the test set. (2) The authors train two networks (SN1 and SN2), but the details of these networks are not reported in the paper. In addition, it is unclear to me why SN1 predicts eight classes, while SN2 predicts two classes. (3) The authors only show qualitative results. The expert should also annotate 2D slices in the test set, so the authors can show that their CNN outperforms the Draw-EM system and the CNN that was only trained using the Draw-EM output as input. It remains unclear if this method improves results. Although the authors clearly did a lot of work, the performed experiments are not adequate to answer the research question. MIDL is a deep learning conference, but the paper does not describe which CNN is used and how it was trained.""",2,1 midl20_123_1,"""This work tackles a challenging problem: inhale-to-exhale CT lung registration. Based on the work of Heinrich and FlowNet, the authors propose to use a correlation layer to generate a displacement field. Pros: - the paper is clear and easy to read - reducing the computation cost by using keypoints is an interesting approach - evaluation shows partial but promising results Cons: - the authors resort to the Foerstner interest operator. How many keypoints are required? - there is no comparison with Heinrich's method or other recent deep learning approaches""",3,0 midl20_123_2,"""This work primarily builds on the idea introduced in Heinrich (2019), namely PDDNet, which itself borrows the correlation layer from FlowNet. The main difference is the use of a non-learned keypoint extractor in order to keep the computational burden in check, the feature dimensionality reduction, and a different graph-based smoothness regularization approach due to the non-regular sampling of keypoints. For feature dimensionality reduction, a simple PCA was employed for this proof-of-concept. Given the closely related work of PDDNet, it is surprising that the authors have not compared their adaptation to this method! I would consider it mandatory to demonstrate the benefit of regular vs. non-regular control point distribution. No information at all is provided on how the displacement embeddings are used to predict the final displacement fields. It is very promising work, and I am looking forward to a future conference / journal article on the approach.""",2,0 midl20_123_3,"""Different from existing standard encoder-decoder networks, it proposes an explicit model for correlating fixed (inhale) and moving (exhale) image features. The features are extracted at sparse key points and transformed into a compact representation by a CNN. Then a displacement map is calculated to measure their dissimilarity and is further represented as a displacement embedding. I think the performance of the proposed method could be affected by the number of key points and the size of the set of displacement locations, although it is understandable not to include all the details due to the page limit. Also, there might be a possibility of estimating unnaturally aggressive deformations since they are estimated based on key points only. In the experiment shown in Table 1, it shows improved performance compared to other deep learning-based methods, including VoxelMorph, and a less obvious improvement compared to other algorithms for large deformations.
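For context on the correlation-layer idea running through the reviews of submission 123, below is a small sketch that correlates fixed-image keypoint features against moving-image features sampled at a set of candidate displacements; the tensor shapes, the softmax normalisation and the random example are assumptions for illustration, not the authors' implementation:

```python
# Illustrative keypoint correlation layer for sparse displacement estimation.
import torch

def keypoint_correlation(fixed_feats, moving_feats):
    """
    fixed_feats:  (K, C)     features at K keypoints in the fixed image
    moving_feats: (K, D, C)  features at D candidate displacements per keypoint
    returns:      (K, D)     soft assignment of each keypoint to each displacement
    """
    corr = torch.einsum('kc,kdc->kd', fixed_feats, moving_feats)  # dot-product similarity
    return torch.softmax(corr, dim=1)                             # distribution over candidates

# Example: 2048 keypoints, 125 displacement candidates (a 5x5x5 grid), 64-dim features.
scores = keypoint_correlation(torch.randn(2048, 64), torch.randn(2048, 125, 64))
```

The resulting per-keypoint distribution over candidate displacements is the kind of object that could then be compressed into a low-dimensional displacement embedding, as the reviewers describe.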
""",3,0 midl20_123_4,"""Summary The authors work on inhale-to-exhale CT lung registration. They propose a novel approach where the final registration is reconstructed from a (low-dimensional) embedding of the displacement space. Experiments are performed on the DIR-Lab 4D-CT and DIR-Lab COPD data sets, including a comparison with five other deep learning based approaches. On the DIR-Lab 4D-CT the proposed method reaches an average landmark error of 1.97±1.42mm. Strengths The paper is well structured and easy to read. Evaluation is performed on two established data sets. Figure 1 gives a good impression of the used network without getting lost in details. The core idea of their work, to explicitly model differences of fixed and moving image features by means of displacement maps and to predict the final registration from a (low-dimensional) embedding of these, is novel. Weaknesses >Method: The proposed method misses two possibilities to introduce really new and refreshing components: explicitly using the geometric structure of the keypoint graph for, e.g., some graph learning, and using something more interesting than PCA to achieve the low-dimensional embedding. These two points are left for future work. >Validation: The validation seems to me a little selective, since I found three recent papers that employ deep learning methods to perform pulmonary registration on the DIR-Lab 4D-CT data set and that claim a higher accuracy than the proposed one: 1.86±2.12mm Sokooti, Hessam, et al. ""3D Convolutional Neural Networks Image Registration Based on Efficient Supervised Learning from Artificial Deformations."" arXiv preprint arXiv:1908.10235 (2019). 1.66±1.44mm Jiang, Zhuoran, et al. ""A multi-scale framework with unsupervised joint training of convolutional neural networks for pulmonary deformable image registration."" Physics in Medicine & Biology 65.1 (2020): 015011. 1.59±1.58mm Fu, Yabo, et al. ""LungRegNet: an unsupervised deformable image registration method for 4DCT lung."" Medical Physics (2020). If one additionally considered the non-deep-learning methods (which is pretty natural from an application point of view), one would find many more works that deliver a higher accuracy than the proposed method. Justification Of Rating Although the basic idea itself is interesting, I don't feel this work is ready to be published now. There is room left for methodological developments that, if carried out, might make this work an interesting contribution to the body of knowledge in the field of pulmonary registration. The comparison does not cover all the recent deep learning based papers. Since the non-deep-learning methods still deliver superior results (not to mention the fact that Rühaak et al. 2017 already reached the accuracy of another human observer for the DIR-Lab COPD data), I feel the aspects of computational performance and simplicity should be worked out in more detail (and with numbers). Altogether, I feel this work is not ready yet and I thus opt for weak reject.""",2,0 midl20_124_1,"""The authors use the feature disentanglement (FeaD-GAN) technique for generating synthetic images and re-sample from a pseudo-larger data distribution to generate synthetic images from limited data. This is a good idea to overcome the lack of data and data imbalance. Three experiments were considered in this study: 1) To evaluate the presence of mutation biomarkers in MR images, 2) To characterize macroscopic features, 3) To evaluate the reproducibility of biomarkers.
The classifier accuracy of 19/20 on co-gain status is good but not validated to be significant. The authors used conventional features like texture and shape features with the FeaD-GAN framework. The highest results are achieved when they consider the texture, shape, and location of the tumor. The authors applied the classifier model just on FLAIR while the TCIA datasets consist of four MRI sequences. Many minor typos should be corrected, and all the symbols should be defined. The baseline should be not only a CNN but also the texture (LoG, GLCM, etc.) and shape features, as in radiomic or radiogenomic analysis. Such an application is needed in the medical field. In addition, this paper addresses real challenges like data scarcity and imbalance, and shows how the proposed work can avoid these issues by combining the advantages of deep learning with texture and shape features.""",3,1 midl20_124_2,"""This paper utilizes brain tumor MRI images to predict molecular biomarkers utilizing deep learning. The authors attempt to address two major issues common in such tasks: an imbalanced dataset and a lack of training data. Predicting such biomarkers utilizing MRI imaging can improve patient management. Both deep learning and texture features are utilized. A novel approach is used toward data augmentation. The paper utilizes a publicly available dataset in addition to a private dataset. A multiple-data-representation approach is attempted, comparing classical texture features and deep learning classification schemes. Accuracy is not a good metric to use when dealing with imbalanced datasets. The authors should consider reporting weighted F1, precision and recall. Typo in Introduction, Line 8: precise approach. the concurrent. It would be ideal to compare the proposed method with classical augmentation approaches, like rotation, scaling, and translation. How was the tumor and skull image obtained? The metrics utilized to prove the performance of the system are not adequate. The authors need to provide additional information with respect to the methodologies utilized in all the steps of this paper. """,2,1 midl20_124_3,"""The paper proposes a GAN-based model that can be trained using limited data. The core idea hinges on (1) a supervised disentanglement that explicitly captures the shape and image features separately and (2) a data-driven empirical distribution for the latent space that reduces the chances of mode collapse. The Wasserstein distance is used for training stability. The proposed model is motivated by and evaluated within the context of detecting imaging biomarkers for genetic mutations. - Embedding image and shape features into the latent space from which random samples are given to the generator is a reasonable idea to avoid mode collapse. - Synthesized images (augmented data) are used to train classifiers, obtaining comparable performance with classifiers trained on real data. - Data-driven embedding to avoid mode collapse has been proposed in the GAN literature, e.g. BourGAN (pseudo-url) - Unjustified design choices. It is not clear why GCNNs are needed vs regular CNNs. - The model relies on significant manual effort for tumor delineation to provide supervised disentanglement. - Missing details, e.g. apparent size used for training, details for the integration module, how pseudohealthy data is generated, oversampling approach, training procedure. - Missing ablation experiments. The paper presents an interesting idea of data augmentation under limited data scenarios that is based on supervised disentanglement.
Experimental results demonstrate that the trained model can reproduce imaging biomarkers relevant to the gene mutation. The paper is missing details and ablation experiments to study the impact of the shape/image disentanglement.""",2,1 midl20_125_1,"""- Quality: Interesting problem of predicting malignancy from longitudinal scans. High-quality cohort of images, though small-sized. Propose a ""two-stream 3D convolutional neural network (TS-3DCNN)"". The authors call it ""sibling networks"" rather than a ""siamese"" architecture, although the two sound very similar. Strong results on F-score improvement: ""This model outperformed by 9% and 12% the F1-score obtained by the best models using single time-point datasets"". A confusion matrix would have been really informative, along with discussion on cases where prediction failed. - Clarity: ""pair of patches of 32x32x32, cropped around the center of the annotated nodules at both time-points"" & ""the radiologists detected and matched the most relevant nodule and annotated its malignancy"": while this is a quite impressive cohort and annotation work, sampling a single nodule per scan remains limited. Any lower threshold on size of nodule, as in [1]? ""The nodules were labelled as malignant if they had a positive cancer biopsy, and benign if they did not have a significant change in their structure, density or morphology during 2 years or more."": these are not exclusive conditions. What about a significant change and surgery but no confirmation of cancer via biopsy? Or vice versa, no change over 2 years but later on cancer? - Originality: This paper is very similar to [1] (some figures are identical) while the final task differs (matching distance versus nodule malignancy). [1] Rafael-Palou X, Aubanell A, Bonavita I, Ceresa M, Piella G, Ribas V, Ballester M. Re-Identification and Growth Detection of Pulmonary Nodules without Image Registration Using 3D Siamese Neural Networks. arXiv preprint arXiv:1912.10525. 2019 Dec 22. """,3,0 midl20_125_2,"""The authors proposed to utilize longitudinal scans for nodule malignancy classification. The proposed method essentially applies the same backbone on two longitudinal CT scans and merges the feature vectors for classification. The method was applied on an in-house dataset, and claimed not comparable with other datasets. It's meaningful to bring attention to longitudinal scans. The dataset is well-constructed. However, the two-stream concept is not very solid in my opinion. For example, for the two-stream action recognition paper referenced, the two streams are spatial and temporal streams. Here it's merely the same feature extractor and classification on the concatenated feature vector. The experimental comparison is not very meaningful. The main reading from the result is that longitudinal data are better than cross-sectional data, which is self-evident. Detailed comments: 1) Can the authors comment on the gap between the training/validation F1 and the test F1? It seems the better performance of TS-3DCNN comes from better generalization capability. 2) Which blocks from the CNN are finally used, or are all the blocks used as the figure suggests? """,2,0 midl20_125_3,"""This paper proposes a pulmonary nodule malignancy classification method based on the temporal evolution of 3D CT scans analyzed by 3D CNNs. It is an interesting idea and the quality is overall rather good for an abstract paper. Some points to address are listed in the following: The early stopping is not clear. 
Specify that it is on the validation set if so, and clarify these points: the number of epochs was set to 150 and early stopping to 10 epochs; why is this clipping used? It is not clear whether T1 and T2 are available for all cases (mostly). In Table 1, bold results are not always the best; this is very misleading. It is strange that T1 and T2 generalize well to the validation set but not to the test set. Can you comment? ... obtained an F1-score of 0.68 -> 0.686? """,3,0 midl20_125_4,"""This paper is well written and easy to follow. Malignancy estimation of pulmonary nodules is a relevant problem and indeed, growth is the most important risk predictor for cancer, so using multiple scans is very relevant. In this paper, the authors proposed a two-stream CNN which takes two 32x32x32 volumes and has a classification head on top to produce a nodule malignancy. The authors nicely outlined how they trained the model and how the data was acquired. In the end, they show a substantial performance improvement over a single timepoint model. Pros: - Dataset with good reference standard set by pathology or 2 years of follow-up - Good comparison with single timepoint models Cons: - Small dataset (30% of 161 cases means only 48 cases in the test set, of which approx. 2/3 are malignant) - It is not reported what the average size of the lesions at T1 and T2 is, nor what the range and median of the time between the two scans are. This is important information. - No comparison with human performance on this dataset. - No ROC analysis.""",3,0 midl20_126_1,"""The method is reasonable, but the paper is very short/vague and the method is not tested with real data. Short papers are allowed to use up to 3 pages excluding references, but this paper only used a fraction of that space. The paper could have been much stronger if it used the full space allotment. I wish the paper would have spent more time explaining why Eq. (1) was chosen given many other possibilities, since the form of Eq. (1) is not intuitive. Since this is a standard superresolution problem, the lack of any comparison against other superresolution methods is an important omission.""",2,0 midl20_126_2,"""This paper proposed to use a null-space network to remove ringing artefacts in PET image reconstruction. Overall, the novelty seems to be limited because the major novelty of the null-space network is from a reference paper. Moreover, this reconstruction network is learned and tested on phantoms. Can this learned network be extended to real PET imaging? Is there a way to collect real PET images and train the network on them? Overall, the novelty in methodology seems to be limited and the experimental evaluations are not convincing. I suggest investigating this research direction more deeply, with improved novelty and more extensive experiments.""",2,0 midl20_126_3,"""The short abstract is very interesting. Data consistency is known to be a fundamental building block in many image reconstruction models. It is not mentioned in the title, but the method is applied to PET images. There was no extensive validation of the approach, which is acceptable within the scope of a short abstract, but the results on the test set seem to be ~50% worse (NRMSE) than the results on the train set, which raises some model generalisability concerns. """,3,0 midl20_126_4,"""The authors present in their submitted short paper manuscript an improved approach to recover/interpolate resolution (resolution modelling, RM) in PET imaging after maximum likelihood expectation maximisation (MLEM) reconstruction. 
The main idea is to regularize RM by a null-space network to reduce reconstruction-related Gibbs ringing artefacts. Overall, the idea of a regularizer network is timely and worth investigating. Digital phantom data (MRI phantom) were transformed to PET-like phantoms. In line with these training data, the authors demonstrated the potential of their approach on a digital test phantom. It is claimed that similar results could be achieved in real PET data; however, no reconstruction results were shown or quantitatively evaluated. The main criticisms (below) are formal and address the missing links/context to other work and current scientific research. Gain: - proof of concept and introduction of a regularized approach which also seems to generalize well - training completely on digital phantom data (synthesized from MRI data) overcomes problems and limitations with data availability and still seems to perform well - translation to real data expected (as the authors claim) Shortcoming: - overall, the manuscript has some shortcomings in the methods section. It has to be acknowledged that this is a short paper submission. However, there is no discussion at all, and therefore the authors do not put their results into context or point out potential or limitations. - Equations are given in the most compressed manner, which is acceptable, but may not allow reproduction of the shown results. In particular, because training and test data are publicly available, this could be of interest with respect to open/shared science. """,2,0 midl20_127_1,"""This is a short paper that targets the integration of adjacent slices into the learning process for intra-cranial hemorrhage classification. The approach uses a multi-slice (slab) network followed by a single-plane fusion network. Results on the RSNA challenge are reasonable (top 4%), but no direct numeric comparison is made with the leading methods. The method performs reasonably versus a baseline network on a private dataset. The method is simple and easy to follow. The approach could be readily combined with other technologies. The results perform well against the chosen baseline method. The conclusion is well supported by the data. The paper does not make a direct comparison against the RSNA leaderboard to show that the technique would augment the highest-ranked non-ensemble method from the board. The level of novelty is not clear. Multiple groups have used multi-slice slab learning followed by fusion. It is not clear that the proposed approach would outperform similar approaches already found in the literature. No assessment of variance was performed. No statistical assessment / modeling was performed. The limited assessment of variability and lack of sensitivity / ablation experiments renders the generalizability of the work difficult to understand in context. The lack of leading comparison algorithms reduces confidence in the strength of the results. """,1,0 midl20_127_2,"""This manuscript describes a two-stage training scheme. The first stage uses 2D slices of CT scans to train a CNN classifier. The second stage stacks a block of 7 consecutive slices from stage 1 and trains another CNN for final prediction. The proposed method was demonstrated on the RSNA ICH dataset and also the CQ500 dataset, and achieved good results. The paper overall is well written. The method is not novel but has been demonstrated with good results. 
All the details of the algorithm have been clearly described.""",3,0 midl20_127_3,"""This paper proposed a two-stage method, which first samples each slice and its neighbors for a coarse per-slice classification; then another network is used to refine the classification of the central slice using the output descriptors of the whole group. 1. The result shows an increase of around 2% in AUC and improves all predicted labels. 2. The method description is clear. 3. The novelty is only to add a refinement 3-layer CNN for the output of the first step, which is not quite enough. 4. The second step uses a CNN instead of an ANN, so that only ""neighbor"" labels' relationship is considered. It would be better to show the comparison to an ANN in the ablation study. """,3,0 midl20_127_4,"""In this paper, the authors propose a two-stage deep learning method for producing slice-wise predictions for intracranial hemorrhage detection. The paper is well written and easy to follow. The authors first train a 2D convolutional network to perform classification per slice using transfer learning. During this training process, the 6 adjacent slices are part of the same batch. Subsequently, a second CNN takes the predictions for each class for the 7 slices to produce a 7x6x1 tensor which is used to produce an updated prediction for the center slice. The method is validated on the RSNA Hemorrhage challenge and the CQ500 dataset. A score of 0.05341 on the RSNA challenge would have ranked 41st in the challenge. Pros: - Trained and validated on a large set. Also validated on an external dataset. - Compared against a publicly available benchmark. Cons: - I think this paper has little novelty. - A logical comparison would be to compare this approach with pseudo-3D approaches where multiple input slices are fed as extra channels. This is a common method which adds little computational overhead. A comparison with this baseline would have been very relevant. - A disadvantage of this approach is that it is not trained end-to-end. Why is this not possible? I do not see why not, and it is not explained in the paper. The second stage could also be added using 1x1x7 convolutions? Also, the authors use batches of 16x7, so I would expect that it would fit in memory if the batch size is reduced. - It would have been good to report the performance without the second stage to see how much that adds to the performance. - From the paper, I understand that the authors only trained with the public training set of the RSNA challenge. Furthermore, performance is measured on the private test set. So, the public testing set was not used at all?""",1,0 midl20_128_1,"""The authors proposed and compared a number of model architectures for incorporating the lateral view information with the posteroanterior features. Experiments in the paper show there is considerable improvement from adding the lateral view. It also suggests the auxiliary loss topology (with curriculum learning) is a better approach than other concatenation methods. Sufficient experiments and convincing results. The auxiliary loss structure for combining PA and L is interesting and seems to give a good improvement. Quantitative comparison of the improvement gained from L over different architectures. Not many novel modifications for the proposed AuxLoss. The performance gain seems to come just from curriculum learning, and the performance of all PA+L combined architectures without CL seems roughly the same. More evidence is needed to support the AuxLoss. 
Given the sufficient experiments and detailed analysis of the results, I will vote to accept. The methodology may be merely incremental but still offers some interesting insights. Overall, the quality of the paper warrants a presentation at the conference. """,3,1 midl20_128_2,"""The authors investigate if the use of lateral (L) views increases classification performance in chest x-ray diagnoses with respect to the use of only posterior-anterior (PA) views. The database used is the large PadChest database, with a pre-selection of cases that have both PA and L views. Several network architectures are investigated, from a stacked densenet to a two-path convolutional network with a joint auxiliary loss. Further experimentation comparing the use of PA and L views with using twice the length of the dataset but with only PA images is performed. Results show that: - When using the same number of training points, the use of both PA and L increases the performance irrespective of the network employed - When training with twice the data, the performance increases considerably - When training on PA and using PA and L views to classify, the performance decreases Experimental validation. Several networks analyzed with repeats. Meta-parameter stability analysis. Two datasets: posterior-anterior and lateral views, and extended posterior-anterior views. Good references, although the reference for PadChest is not complete. Experimental paper with a low amount of novelty. Of course, adding more views increases diagnosis performance. That is no surprise. Adding more data also increases performance. No surprise either. Is it better to add more views or more data of the same view? According to their data, DenseNet-PA on the extended dataset obtains the best performance metric. It only requires a larger training dataset. From a patient perspective, acquiring a single PA view is better than acquiring two views, since it involves lower radiation. The conclusion of this paper to me is that, if there is a system to diagnose disease from chest x-rays, the best option would be to train it with a database that is as large as possible with only PA views. This is not mentioned in the discussion. The experimentation is solid but it leads us to expected conclusions. Missing clarity. The conclusions of the paper are of limited interest, and I consider them incomplete. The authors should consider whether they are trying to solve a medical need (such as automated diagnosis) or to address whether adding extra views is preferable to an extended dataset with a single view.""",2,1 midl20_128_3,"""The authors consider the problem: are the lateral view images as important as PA view images in chest X-ray datasets for deep learning models? The authors performed extensive comparisons with several different models. The authors conclude that ""it appears a well-tuned PA-only model is competitive with a well-tuned joint model"". Pros: This paper performs a detailed comparison between different models, and the ablation study is also complete and convincing. The authors carefully designed experiments to determine the effect of images from two views, and plotted the curve of AUC varying with the proportion of paired lateral images. Cons: This work appears to be limited to empirical evaluations and comparisons to me. The authors do propose an ""Auxloss CL"" model, which is not in the literature. 
But the idea of combining several different losses (termed an auxiliary loss) has long existed in the literature, for example in image segmentation (e.g. PSPNet, Context Encoding, and some earlier works). Overall, I think the empirical evaluation is quite extensive and solid, but the novelty is limited. However, I'm not sure if such novelty is a hard requirement for MIDL. Therefore, I tend to give a weak-accept rating.""",3,1 midl20_129_1,"""It is a validation paper, yet without complete validation. The authors use an existing method for mitochondria detection for connectomics on three datasets including a public dataset. Then they compare the detection performance with that reported in other literature. They claim the importance of real-time detection, yet did not do a good comparison on speed. The authors apply the proposed method to three datasets, including one public dataset. They make their dataset and code publicly available. They collect detection performance figures from several publications. It is a validation paper, which requires, of course, good validation. Table 3 shows that the performance of the proposed method is worse than that of the comparison methods. But the authors claim that speed is very important for this application, and the proposed method is very quick. This claim is only valid if the authors provide enough validation on speed, which is not provided. Table 4 is supposed to provide a comparison on timing. But the authors only compare the proposed method with one comparison method. If the authors could run other comparison methods by themselves and report time cost, this paper would be acceptable. The proposed method did not give a very good detection performance. But the authors claim that speed is very important for this application, and the proposed method is very quick. This claim is only valid if the authors provide enough validation on speed, which is not provided. If the authors could run other comparison methods by themselves and report time cost, this paper would be acceptable.""",2,1 midl20_129_2,"""The authors propose a method to segment mitochondria in 3D electron microscopy images. They suggest using a 2D U-net with a reduced number of filters in each convolutional layer to process each of the slices in the 3D volume and hence speed up the processing. Additionally, they avoid image alignment between the slices and, thanks to this, they achieve real-time segmentation. While high-resolution connectomics images are usually downsampled, the authors manage to skip this step and, even so, obtain low processing times. They validate their approach using three publicly available datasets which have also been used in very similar works. The code, data and a user interface are freely available but hidden for the review due to the blind policy. In general, the document is well written and the methods are well described, which supports its reproducibility. The results obtained for both the accuracy of the segmentation and the processing time are quite good. The authors curate part of the publicly available datasets for connectomics and they provide a new version of them, which is very useful and important for the scientific community. I would highlight the fact that the entire code and data are freely available as open-source software, which is not always the case. It seems that they also have some annotation framework available (supplementary Figure 1), which, if true, can also be very useful. 
The weakest part of this work is that neither the problems to solve nor the proposed approach are innovative. The reduced U-net consists of a network with half the number of filters in each convolutional layer. Besides, the suggested 3D interpolation was already used in Oztel et al., 2017. Indeed, the accuracy results of the latter are better, so where is the improvement in the results reported by the authors? The architecture proposed in Oztel et al., 2017, seems to be much simpler/smaller than the reduced U-Net used here. However, its accuracy measures are better; what about the processing time? Also, what can the authors say about this? Could it be possible to reduce the number of parameters in the U-Net even more? Besides, I miss some more recent references such as Chi Xiao et al., Front. Neuroanat. 2018 (pseudo-url), where they also train a reduced U-Net with some residual layers and skip connections. Is this approach more accurate but maybe more expensive in terms of memory and time? Regarding motivation, it is not clear why it is important to get a method faster than the acquisition speed of modern electron microscopes. Could it be possible to integrate this processing into their software? What would be the benefits of it? In terms of methodology, the work is not novel, but the authors are providing curated datasets and open-source software that will be used by the community in the future. With little effort during the rebuttal, the work could be a good reference.""",3,1 midl20_129_3,"""The authors proposed a modified encoder-decoder (U-net) based architecture for the segmentation of mitochondria. The proposed architecture has significantly fewer learnable parameters than the original U-Net. Besides, the authors have re-annotated the standard datasets (Lucchi and Kasthuri) by removing the boundary inconsistencies and incorrect classifications. The proposed method is evaluated on these updated datasets and obtained a Jaccard index of 0.90. Given the limitations of the existing standard datasets (Lucchi and Kasthuri), there was a need to improve the ground truth of these datasets. The authors have addressed this issue and re-annotated these datasets. The procedure for re-annotation (and verification) by experts has been very well documented. Besides, a modified U-net architecture is presented and evaluated on this re-annotated dataset. It is shown that the proposed architecture performs competitively with the existing state-of-the-art methods. In my opinion, it would be better to explain in detail the proposed encoder-decoder architecture for segmentation. A detailed figure of the architecture (similar to the one provided in the supplementary materials) would be sufficient. In addition, the following questions need to be answered: 1) It is claimed that reducing the number of learnable parameters from 31 million to 1,958,533 has no impact on precision. This statement should be explained either theoretically or experimentally. 2) In the proposed architecture, the output segmentation map is of the same size as the input map. How is this achieved? By padding? 3) There isn't any 1x1 bottleneck layer in the proposed architecture. Any particular reason for eliminating this? In my opinion, one of the major strengths of this work is the re-annotation of the existing dataset, which would further drive the research in detection of mitochondria. Besides, the proposed architecture based on U-net performs competitively with the existing methods and obtains near real-time performance. 
""",4,1 midl20_130_1,"""This paper concerns lobe segmentation from Thoracic CT images using deep learning. This is certainly not the first paper on this topic, but the novelty of this approach is the weighted Dice coefficient, which puts special emphasis on the regions near the lobe boundary. This is not completely novel, as something similar was done already in Gerard et al. Also this paper doesn't have a large amount of data. However, as preliminary work, it is a good idea and shows promise.""",3,0 midl20_130_2,"""Summary: Modified dice loss with weights computed based on Euclidean distance transform is proposed to improve segmentation of fissures between lung lobes. The weighted dice loss is presented as a novel contribution and is used to train a 3D Unet. Experiments compare performance to a baseline model and with Unet trained without weighted dice loss. Strengths: + Weighting based on Euclidean distance transform is a useful idea. Weakness: - Presenting the weighting strategy as a novel idea is a bit of a stretch. The paper could be strengthened with more thorough validation and discussion of the improvements due to weighting. - The results are not convincing. While weighting dice loss shows improvement to Unet performance, it is still similar to the baseline method. There is no acknowledgement or further discussion about this. """,2,0 midl20_130_3,"""In the introduction the authors state that none of the existing deep learning approaches to lobe segmentation use explicit knowledge from pulmonary fissures. This is not true, the Gerard and Reinhardt 2019 reference uses pulmonary fissures as an input channel. This method is currently the leader in the LOLA11 challenge which should also be mentioned. It seems the proposed method requires a lung segmentation as a precursor which distinguishes left and right lungs (for cropping input). This needs to be explicitly mentioned. It should also be explained in the methods how this was obtained in this work. During training patches of size 60 are used, however, it is unclear what is done during inference. Are the same patch sizes used? Are non-overlapping patches used? If not, how are patch results merged? Figure 2 shows the ""mean distance to visible fissure"". It is unclear how this is calculated. For this to be calculated the evaluation data ground truth would need to include annotations of just visible fissures, however, based on the description it seems only complete lobe segmentations are available, i.e., all extracted fissures would be extrapolated and include both visible and non-visible fissures.""",3,0 midl20_131_1,"""This paper is well-written and well-organized. The authors applied geometric deep learning based on spline convolutions to the problem to learn the relationship between the functional organization of visual cortex and anatomy. However, theres no comparison with other competing methods, so its hard to tell whether the proposed method is good enough or not. Also, theres no quantitative evaluation.""",3,0 midl20_131_2,"""This paper focuses on the application of a geometric deep learning method (spline-based convolutions, Fey et al., 2018) to be able to predict functional retinotopic maps from purely anatomical features using the HCP retinotopy dataset. Pros: The paper is written clearly, with the motivation well explained. The results of this paper are clearly promising, with the method showing not only good prediction of retinotopic maps, but also correctly predicting individual variations. 
Cons: The authors do not provide references or mention whether the prediction of retinotopic maps in the HCP retinotopy dataset has been tried before; therefore, it is unknown whether similar results have already been achieved or not. There is a limited description of the method; however, the authors do provide a clear figure of their network. There is limited validation, and no quantitative results are presented in the paper. """,3,0 midl20_131_3,"""In the paper, the authors apply geometric deep learning methods to predict the functional organization of the human visual cortex from MRI data. Curvature data, myelin values, and connectivity of vertices on the cortical surface are passed through spline-based convolution layers to predict the retinotopic map. The network is tested on data from the HCP dataset. The paper is clearly written and presents an application of geometric deep learning methodology on a medically relevant dataset, with a network architecture that seems well-suited to the task. The methodology and application are relevant for the MIDL audience. I believe the paper fits the intended focus and scope of a MIDL short paper very well.""",4,0 midl20_131_4,"""This paper applies the geometric deep learning framework by Fey and Lenssen to retinotopic data from 7T fMRI from the HCP dataset. This is potentially a good application paper. However, there are many limitations. The test results and evaluation are only shown visually and are qualitative. Since the surfaces are all registered, they could simply compute the error between the predicted retinotopic map (continuous scale) and the actual map. Visually, the results look reasonable, although the gray markers that the authors have drawn on the retinotopic maps are distracting. They do draw attention to the similarities between the ground truth and the predicted maps; however, they also ignore the differences. The same goes for the polar angle and the eccentricity maps. In both of these cases, a numerical evaluation would have been better. Since the retinotopic maps have considerable inter-subject variability, more insight into the predicted mapping would be valuable. No information is given about the error evolution and the learning rate. The authors should also provide the mean squared error. The authors provided curvature and myelin values as an input to the network. Although the loss function only included the retinotopic maps, it would be interesting to see whether the retinotopic map prediction would be improved if the loss function incorporated the curvature and myelin (capturing the anatomy). This is because retinotopy is also dependent on the anatomy. """,2,0 midl20_132_1,"""The authors aimed to reduce the size of a segmentation model such as U-Net while keeping a large receptive field. A novel block is proposed which consists of max pooling at different scales with a stride of 1, so that down-sampling layers are not needed to get a large receptive field for segmentation. Experimental results showed that the proposed method had fewer parameters and lower segmentation performance than U-Net. However, their receptive fields were not calculated for comparison. 1, The idea of using max-pooling at different scales to achieve a large receptive field without introducing extra parameters is interesting. 2, Both simulated images and real images were used for experiments. 1, The authors tried to keep a large receptive field without using down-sampling and a large set of parameters. It is good to see that the proposed T-Net is much more lightweight than the U-Net. 
However, do they have similar receptive fields? A comparison of the receptive fields of the different methods is missing in the paper. 2, From Figure 4, it seems that the receptive field of T-Net is much smaller than that of U-Net. Therefore, T-Net has fewer parameters and a smaller receptive field than U-Net, and leads to lower performance than U-Net as shown in Table 2, which does not bring much valuable information. Can the authors elaborate more on T-Net so that it has the same receptive field as U-Net? 3, Without down-sampling, the memory consumption of the network will increase a lot and it requires more computational time for convolution. However, the authors did not comment on such drawbacks of the proposed method. 4, The description in the introduction is incorrect: ""With two layers following each other, the last one can see a 4x4 neighborhood"". Indeed, if you use two 3x3 convolutions one after another, the second will have a receptive field of 5x5. 5, There are other options to reduce the model size, such as reducing the number of channels of U-Net. But such methods were not discussed and compared. 6, The experiments failed to show the advantage of the proposed method. Its performance is much lower than U-Net and the authors did not show the results on the test set of BraTS. Due to the above weaknesses, even though the proposed 'transfer block' as a whole has some merits, the exact receptive field of the proposed network is not compared with U-Net. The experiments are also not convincing, as the performance is much lower than that of U-Net. """,2,1 midl20_132_2,"""- The authors proposed the idea of a ""transfer"" block, with the goal of expanding the receptive field without incurring the cost of an increased number of parameters. - The idea is to aggregate information from pooling with different window sizes. - Experiments have been demonstrated on a synthetic toy dataset, and the brain tumor segmentation (BraTS) datasets. - The paper is well-written and easy to follow. - The methods have been demonstrated to be able to solve a toy problem to the same level as a heavy UNet, which includes orders of magnitude more parameters, and to show the limitation of dilated convolutions. The proposed idea seems trivial and, indeed, has been used in image segmentation for years and widely adopted in the segmentation community; check this: Zhao et al. ""Pyramid Scene Parsing Network"", in CVPR 2017 (cited over 2000 times). So, I don't see anything novel in this paper. The novelty of this paper is limited. The proposed 'transfer' block indeed solves the problem, but a similar idea has been widely adopted for years in the image segmentation community, so if this paper targets a novel computational module, I don't see its contribution.""",2,1 midl20_132_3,"""This paper introduces a convolutional module (transfer block) to extract multi-scale features, mainly via parallel maximum pooling operations with different window sizes. The proposed module is mainly tested by replacing the relevant modules in UNet, in a synthetic task and an MR brain tumour segmentation task. The proposed idea of splitting convolutional channels and applying pooling with different sizes to achieve multi-scale feature maps is easy to follow. The research direction is highly relevant to the conference and multi-scale feature analysis is an important research area in computer vision. I have a few major concerns about this paper: - novelty: learning from multi-scale image representations is a widely studied topic, e.g. 
Chen et al., Person Re-Identification by Deep Learning Multi-Scale Representations, and Xie and Tu, Holistically-Nested Edge Detection. - the goal of transferability is addressed neither in the method description nor in the experiments. - the experiments are relatively weak: in the synthetic example, the fact that both Unet and the transfer net yield good results might simply be due to the task being too simple. For the brain tumour segmentation evaluations, the proposed method's performance is much worse than a UNet, and reporting only the validation result but not the test set performance is not convincing. - additionally, there are a few technical details that need to be clarified: -- why does this paper only study max pooling instead of other non-trainable local operations? -- what's the motivation for keeping the dimensions the same in the block (km==n)? The paper lacks novelty. The experimental setup for the synthetic dataset is probably too simple to distinguish good and bad multi-scale deep learning architectures. The brain tumour segmentation experiments and results are not convincing, as the paper only reports validation scores and doesn't verify the model's generalisation ability on the test set.""",1,1 midl20_132_4,"""The authors propose the transfer block, which consists of two convolution layers and one transfer layer. The transfer layer applies stride-one windowed maximum pooling on a set of feature maps. On the remaining feature maps, a different stride-one windowed maximum pooling is applied. This results in a final output feature map of the same size as the input. The transfer layer increases the receptive field and avoids pyramidal feature extraction. 1. The paper proposes a non-parametric way to increase the receptive field to the size of the original image within one layer of a CNN. The method is validated on a synthetic dataset and the BraTS 2017 dataset. 2. The authors compare the transfer block with two versions of the UNet and a very simple baseline. 3. The authors compare the stability of the training procedure of the proposed architecture with the baseline. 4. The potential of the work is promising and further evaluation on larger practical applications would be very interesting. 1. How does the model scale with the size of the input in terms of GPU memory consumption? Does the model have similar parameters/performance when trained on the original image resolution of BraTS? 2. What is the speed-up achieved using the proposed transfer block? The paper presents a proof-of-concept transfer block to reduce the number of network parameters. The results reported are promising and further evaluation over larger datasets would aid in highlighting the significance of the work.""",4,1 midl20_133_1,"""The paper presents a novel approach for simultaneous registration and segmentation of weekly CT scans for extracting organ at risk structures for prostate cancer. The paper is written very well, with reasonable experiments and results. Accurate segmentation of multiple OAR structures is essential for adaptive radiotherapy and this work, if validated, has application to several problems in radiation therapy. A few things that could be improved and explained better are: 1. What was the reason to use an NCC metric? It's slow to compute and can be problematic when the images are far apart. 2. Maybe I missed this. A new approach that combines segmentation and registration. Use of a cross-stitch network for this problem is novel. The paper is written reasonably well. The results seem to improve over previous results. 
The number of cases used for testing is small. While this method is supposed to present an approach for longitudinal registration and segmentation, results for longitudinal registration/segmentation as the organs deform and change over time are not shown. Also, the comparison does not include deep-learning registration-only methods like Quicksilver (Yang et al.). Some details of the methods, the reasons for the choice of architecture, and ablation tests are missing. E.g., what is the patch size used? Why use leaky ReLU? What is the rationale? The authors say they needed 4 cross-stitch units. The original Misra paper showed the importance of properly initializing the individual networks, and studied the impact of learning rates as well as the initialization of the alpha values for cross-stitch. None of these seem to have been studied here, or at least they were not reported. The only thing the authors talk about is that the number of cross-stitch units mattered the most. Also, the authors talk about deep supervision but do not report on the effect of deep supervision or why it was needed. Ablation studies to parse out the impact of the various losses are needed. Similarly, why NCC? Why not a better metric like EMD to compare patches? NCC is also slow and not necessarily as accurate as EMD, especially when the distributions are far apart. Finally, the planning CTs are quite a bit different from weekly CTs, as the latter are often low-dose CTs used just for positioning. How well does NCC work here? The paper presents a new approach for simultaneous registration and segmentation. This method is also applied to the significant clinical problem of weekly normal-organ segmentation in radiotherapy. The methods are described reasonably well, although additional details should be included to make the paper more understandable. Furthermore, the results look convincing and clearly show improvements over some of the previous results. However, a comparison with deep-learning registration-only methods would be more reasonable than with Elastix registration. Ablation experiments are also needed for more comprehensive reporting of results. But this is not a journal paper, so some leeway in this regard could be given as long as the rationale for the network design is presented. """,3,1 midl20_133_2,"""This paper investigates the joint learning of segmentation and registration tasks in the context of image-guided radiotherapy using daily prostate CT images. While for structures whose shape remains relatively close to the planning scan, and in regions of low contrast, registration performs best, segmentation can better adapt to changes in anatomy between visits (e.g. the bladder). This motivates the authors to leverage the strengths of both tasks. Therefore, it is proposed to employ a strategy that was originally proposed for multi-task learning in computer vision, named cross-stitch. Cross-stitch layers are introduced to allow each of the two network paths (registration and segmentation) to leverage the features from the respective other task via a learnable weighting. The use of cross-stitch for segmentation of prostate CT scans has been compared to non-jointly learned task models (CNN-based segmentation or registration), classic iterative image registration (elastix), and a hybrid method consisting of learned segmentation followed by classic iterative registration (elastix). This paper is well written and the authors motivate the approach in the context of their application. 
While cross-stitch has been extensively evaluated on multiple computer vision tasks (dual-task learning for two different related tasks), this paper introduces this idea to the medical community to jointly learn segmentation and registration, two pillars of medical image analysis, by intertwining both task-specific networks at the architectural level (in contrast to the loss term only). As the results on the independent test set demonstrate, and as openly discussed by the authors, the approach suffers from weak generalization to a different scanner and/or institution (contrast change). This is a weakness of all compared learned approaches, except for the hybrid method of learned segmentation followed by classic iterative registration. A weakness of the methodological contribution is the limited novelty, considering that this is merely the application of an idea proposed for multi-task learning in computer vision to the medical domain. I see a main interest in this work for introducing the medical community to the idea of cross-stitch for multi-task learning, namely for joint registration and segmentation. The methodological novelty is incremental but, combined with the comparison to alternative approaches, of sufficient interest to conference attendees.""",3,1 midl20_133_3,"""The authors propose a new network architecture to jointly learn multiple tasks (segmentation and registration) by using cross-stitch units. They perform a detailed evaluation of their method on the task of recontouring in adaptive radiotherapy (CT-CT follow-up registration/propagation). The proposed method shows good results on the dataset that the network was trained on. However, on an independent dataset the performance is worse. The authors present a new method for jointly learning registration and segmentation by introducing a new architecture that combines both tasks in one network. They show that combining multiple tasks (segmentation and registration) can help to increase the performance of both tasks. The authors perform a detailed evaluation by comparing their method with: * a single-task registration network * a single-task segmentation network * a registration network which includes the segmentation in the loss function * a joint registration-segmentation network using hard parameter sharing * Elastix * a recent deep-learning method which uses a discriminator network for giving feedback on the warped images and contours. The authors are aware that a different number of filters in the used architectures might falsify the result. They took care of that by adapting the number of filters for each architecture so that all networks have approximately the same number of filters. The authors show the transferability of their trained network by evaluating it on an independent dataset which wasn't used during training. However, the results are worse on this dataset. The authors don't describe if and how the deformation field patches are combined into an overall deformation field. Without this, the proposed method is more useful for the segmentation task than for the registration task. Especially for dose accumulation, a full deformation field is needed. The authors don't evaluate the number of foldings. However, this is an important measure to evaluate registration quality! There are still a number of unanswered questions. The authors propose an interesting new approach for joint segmentation and registration with a detailed evaluation. Therefore this work should be presented at MIDL. 
However, there are still some questions open that should be answered. """,3,1 midl20_133_4,"""The authors applied principles of multi-task learning towards adaptive radiotherapy. The authors investigated the use of Cross-Stitch modules to improve the performance of organ at risk (OAR) segmentation and the registration of CT scans. The authors demonstrated that a multi-task network trained with Cross-Stitch performed best when validated on data similar to the training distributions. The authors also show that the Cross-Stitch method outperforms single task and vanilla hard-parameter sharing on the independent validation set whilst being competitive with respect to domain-specific strategies. The authors have successfully demonstrated the applicability of training a 3D multi-task network for image-guided radiation therapy. [1] The paper is well written and it is mainly clear what the experiments were in addition to the main observations [2] The quantitative results are very good and the qualitative results nicely illustrate the benefit of training tasks in a multi-task setting. [3] The results on the independent test set demonstrate that the Cross-Stitch network can remain competitive when trained on datasets stemming from a different medical centre. It is interesting to see that the performance of single-task networks drops dramatically but the cross-stitch and hard-parameter sharing networks still perform adequately. * In my opinion, this is mainly an application paper as there is no new methodology presented. However, for an application paper, I find that the experiments are lacking. It is known in the literature that multi-task learning is likely to improve performance of tasks in comparison to single task networks. If the main objective was to showcase that multi-task learning can improve segmentation and registration tasks for radiotherapy, then various multi-task learning methods should have been compared with discussions centred upon which methods can benefit radiotherapy the most. There are numerous other techniques, which could additionally have been tested such as: multi-task learning with homoscedastic task uncertainty [1], with heteroscedastic uncertainty [2] and with gradient normalisation of task gradients [3]. [1] Kendall et al. pseudo-url [2] Bragman et al. pseudo-url [3] Chen et al. - pseudo-url The authors have demonstrated well the capabilities of multi-task learning for joint learning of image segmentation and image registration for adaptive radiotherapy. The authors demonstrated that using Cross-Stitch modules produces the best results when compared with single task networks, hard-parameter sharing networks and other methods. The results are good and demonstrate that multi-task learning should be used. However, I find the experiments lacking for an application paper with no real, new insights; either about the analysed methods or the clinical applicability.""",3,1 midl20_134_1,"""The authors propose a new deep learning framework for combined segmentation of retinal layers and fluid. They propose a new network architecture that merges a U-Net and a FCN, and apply it in a cascaded way to include prior knowledge about the location of the retina in the scan. The paper describes a combined repertoire of techniques (much of it appears to be based on prior work) that together leads to satisfactory performance. 
Performance on the task at hand seems to be the main goal of the authors, obtained through many ad-hoc and empirical decisions, which makes it hard to extract methodological contributions that have value in the broader field of knowledge. - The paper is generally well written, well structured, and easy to follow. - The paper contains a combination of many useful techniques (loss functions, network architecture, prior knowledge, post-processing) and the authors explain clearly how the combination of all of these can lead to increased performance. - There is no clear scientific hypothesis or experimental validation behind much of the methodology. Many decisions seem to have been made ad-hoc (e.g. the combination of loss functions, value of the weights, the combination of the different layers in the proposed network architecture). This makes it much harder to identify the value of proposed novel methods outside the context in which they were developed. - Going forward from the previous point: in such a setting with many manually tuned hyper-parameters, the comparison with other architectures/settings (as in Table 1) is a bit problematic. If hyper-parameters are optimized for the proposed network architecture/cascading setting, we cannot assume they are equally well-suited for other settings. This may bias the results towards the proposed setting, which is especially problematic since no statistical validation has been performed. - The cascaded setting of incorporating prior structural knowledge is not completely novel. Although not identical, it is very similar to the approach in [1], which has not been mentioned in the references. [1] Venhuizen, Freerk G., et al. ""Deep learning approach for the detection and quantification of intraretinal cystoid fluid in multivendor optical coherence tomography."" Biomedical optics express 9.4 (2018): 1545-1569. This paper mainly describes all the steps that the authors took to get to satisfactory performance. Although there are some interesting concepts that could be useful in other domains, it is very much tailored towards the specific application. """,2,1 midl20_134_2,"""The manuscript presents two cascade networks for segmenting retinal layers and fluid from OCT scans, exploiting the concept of anatomical constraints introduced in [Lu et al., Medical Image Analysis 2019]. The addressed problem is relevant for the CAD community. The methodology presents some kind of innovation in how the anatomical constraints are computed. The manuscript addresses a relevant and up to date problem, is well written and easy to follow. The proposed solution is novel and may be exploited also in close fields. Several experiments are performed to support the authors' investigation hypotheses. Some methodological details are missing, hampering a proper understanding of the proposed methods. The experimental setup can be improved, for example by providing more performance metrics (right now, only the Dice similarity coefficient is provided). The survey of the state of the art can be improved by better highlighting limitations of current methods in the literature. The difference with respect to [10] and [7] should be highlighted better. I enjoyed reading the paper. The quality of the manuscript matches the MIDL requirements in terms of novelty and readability. The proposed methodology (i.e. 
the inclusion of anatomical priors in the segmentation pipeline) can be an inspiration for work in related fields.""",4,1 midl20_134_3,"""The paper proposes a cascaded deep network based on the UNet architecture for the segmentation of retinal layers and fluids from OCT images. The first network segments the ILM and BM layers. Next, a relative distance map is computed from the output of the first network. The distance map, together with the input, is fed to the final network to obtain a segmentation of 6 retinal surfaces and fluids. Finally, a Random Forest is trained for post-processing to remove the false fluid regions. 1. The main contribution of the paper is the cascaded network and the relative distance map. 2. The results justify the use of cascaded networks and, in turn, evaluate the effect of the relative distance map. 3. The experiment showing the use of considering consecutive slices for segmentation is reported. 4. The authors compare their work with two different CNN networks to highlight the benefit of the relative distance map. 1. How expensive are the distance map computation and the RF post-processing step? 2. Is the training performed end to end, or is each network in the cascade trained individually? 3. Why does your model tend to have more FP regions compared to RelayNet or UNet? 4. Is thresholding applied to the output of the deep network (UNet, RelayNet or LF-UNet) to get the fluid regions? If yes, could a different threshold select more FP regions, allowing the Random Forest to improve the performance? The paper is clearly written. The authors justify the use of the distance map with the cascaded network to segment retinal layers and fluids. The authors could have compared the results on publicly available datasets to better benchmark the paper against the cited reference. """,3,1 midl20_134_4,"""* A two-stage deep learning approach for retinal layer and fluid segmentation in OCT. * A combined architecture mixing a classical U-Net with dilated convolutions in the decoder. * A relative distance map is used in the second stage to incorporate prior information regarding the position of the layers. * A Random Forest classifier is used after the network to remove false positive detections in the fluid area. * Experiments on an in-house data set show that the method outperforms two baselines (a U-Net and a RelayNet) in terms of average Dice. * High performance (as measured by average Dice) for layer segmentation, but still poor for fluid segmentation. * Highly applicable to quantify layer properties in diseased patients suffering from macular diseases such as age-related macular degeneration, retinal vein occlusions or diabetic macular edema. * The architecture proposed in the paper is novel in the sense that it integrates dilated convolutions in the decoder to capture multiresolution features. * The incorporation of the relative distance map is not novel, as it was already used in other medical imaging papers. However, it is novel in this specific application. The experiments in principle suggest that this prior knowledge helps the method better segment the retinal layers in comparison with the two other baselines. * The two-stage approach outperforms two baselines in terms of average Dice. * The problem is of interest for the retinal image analysis community. * It is not clear if the improvements in performance can be attributed to the change in the U-Net architecture or to the incorporation of the two-stage approach. 
I would suggest that the authors include an additional experiment showing the performance of a single-stage LF-Unet model. * The results are poorly presented in a table, in terms of average Dice, without including standard deviations and statistical tests showing the significance of the improvements in performance. The paper would benefit from replacing Table 1 with a box plot showing the distribution of Dice values in the test set, and results of t-tests or Wilcoxon signed-rank tests comparing those distributions. * No qualitative results are included in the paper. As a result, the reader won't be able to observe the impact of the low performance for fluid detection. Although the contribution is definitely of interest for the MIDL audience, I'm afraid that the experimental part is missing some relevant experiments and is not well presented. I would suggest that the authors implement the changes I suggested to cover this point. If they do that, I would be more than happy to accept the paper.""",3,1 midl20_135_1,"""The authors propose to solve artery/vein classification in two sequential steps that are trained end-to-end. A first model solves the segmentation task, and then the resulting blood vessel prediction is used as a kind of attention mask on top of the retinal fundus image (by point-wise multiplication) that a second model aimed at performing artery/vein classification uses as its corresponding input. There is a further contribution given by a novel post-processing technique to make predictions more consistent. - The idea of sequentially segmenting and classifying pixels in this specific problem is simple and elegant. - The post-processing technique seems quite natural and makes sense. - Results appear to be reasonably good, although I have some reservations (see below). - There is a lack of very important details in section 2.1. Specifically, I can't find any mention of the loss function that was minimized. Was there a loss for IterNet and a separate loss for the artery/vein branch? Were both sub-networks trained jointly, or was IterNet trained first and the artery/vein module trained only afterwards? Learning rate, batch size, optimizer, data preprocessing, and any other technical details that would allow this work to be reproduced are missing from the paper. - As with any post-processing technique, there are several hidden parameters that need to be tuned by hand. For instance, the threshold in eq. (2) to obtain a binary vessel map, or m_A in eq. (7), to name a few. In reality, decomposing a vessel segmentation into segments is a very noisy process and all these parameters may need some tedious adjustment when moving from the training dataset to another dataset. However, the authors avoid this issue in a rather ""suspicious"" manner: they only use DRIVE and INSPIRE for testing purposes. But since INSPIRE does not have pixel-wise annotations, they just use the segmentation given by IterNet, and therefore the hand-tuned values that they used for DRIVE segmentations will likely also be valid for INSPIRE. I believe we need more experimental evidence on other datasets in order to understand if this really works ""universally""; see below. - Results for A/V classification only come in the form of Accuracy. Evaluating this problem is very tricky, because there are some questions left. For instance, what happens to pixels that their method did not find to be vessel pixels: are they included in the accuracy computation for A/V as missed pixels? 
Or the same question for false-positive vessel pixels: how do we handle them when evaluating A/V? Without answering these questions, it is really hard to compare with other people's work. Please see below for some solutions. - It should be clarified that the method was not retrained on INSPIRE before computing results there (which I hope was the case). - References are a bit strange, at least the one for IterNet: it does not even mention the journal/conference/arXiv on which it was published. The ideas presented in this paper are simple and nice, and I believe there is some novelty in the approach you are proposing. I would be happy to improve my rating if 1) missing technical details are added, and more importantly 2) a more rigorous evaluation of the results, with more datasets and following the steps I indicated above, is reported.""",2,1 midl20_135_2,"""The paper proposed a multi-stage network to perform both vessel segmentation and artery/vein classification. To ensure the label consistency of pixels on the vessel segment, the authors proposed heuristic methods for post-processing. The segmentation network is directly borrowed from Li et al., 2019, and the classification is a simple UNet. -The paper is well organized and easy to read -Figure 4 gives a nice demonstration of how the propagation works -The paper has a nice overview of the recent works -The main contribution is clearly highlighted -Details on how the network got trained are lacking, for example: -what's the train/test split? -what's the augmentation, optimizer and associated hyperparameters? -The post-processing methods are hand-crafted with a few empirically selected parameters, for example m_A in equation (7). The authors should have discussed how sensitive the post-processing is to these hand-picked parameters. The paper proposed a multi-stage network to perform both vessel segmentation and artery/vein classification. I feel the main novelty lies in the post-processing part where the authors proposed techniques to ensure intra-segment label consistency and inter-segment label propagation""",3,1 midl20_135_3,"""The authors used a cascaded U-Net-like architecture to segment retinal vessels and classify arteries and veins. The key idea is to classify the A/V inside the segmented vessels to exclude the background regions. The proposed post-processing is used to refine the classification results. The authors accomplished a complete segmentation & classification task for retinal vessels. 1. A unified segmentation and classification framework for retinal vessels. 2. A/V classification is performed inside the segmented retinal vessels to suppress the imbalance problem. 3. The proposed post-processing procedure efficiently improved the classification accuracy. The post-processing procedure seems to be generally designed; it can theoretically be used with other methods. 1. The main idea of the proposed SeqNet is to classify the A/V inside the blood vessel region (foreground) to handle the imbalance problem. However, a lot of research has proved that many deep-learning based methods (such as the previous work DS-UNet) can handle the imbalance problem between foreground and background quite well. If the imbalance problem really affects the classification accuracy, the authors should give the corresponding comparison experiments. 2. The authors should give more detailed validation experiments.
Since the improvement of segmentation accuracy is also one major factor affecting the classification accuracy, the authors cannot claim that the improvement is achieved by the joint learning architecture. The authors should rethink their experiments and set the baseline properly. 3. I doubt the effectiveness of the joint learning framework. I think the proposed joint learning is equal to a cascade learning framework or maybe even worse. The final loss (CE?) contains the background part (needed for the segmentation network), but this background part will still have an effect on the classification network. This joint learning framework does not tackle the imbalance issue, theoretically. What about training the classification network with a well-trained segmentation network? An additional comparison experiment is preferred. 4. I think the authors should focus on the post-processing part, since the neural network part confused me and is not well explained. Exploring the effectiveness and robustness of simple rule-based post-processing could be interesting (experiments need to be re-designed). This paper aims to propose a novel network architecture to improve the retinal A/V classification accuracy along with a newly proposed post-processing method. However, for the neural network part, neither the theoretical nor the experimental support is satisfactory. Throughout the whole paper, this work 'SeqNet' seems to emphasize the deep-learning part rather than the post-processing. I give a strong reject rating. """,1,1 midl20_135_4,"""The authors describe an approach to segment the vessels in retinal images and additionally perform artery/vein separation. The method consists of an initial (U-net-like) segmentation, which is subsequently refined with an IterNet. The resulting final segmentation is fed into a network that does the artery/vein labeling (pixel-wise). These classification results are subsequently improved in an (ad-hoc) post-processing step. In this processing step, a segment-wise majority-voting is applied, using either only the pixels in the segment, or also including a weighted contribution of neighboring segments. In the latter case, the weights are determined from various geometric properties that quantify how likely it is that the two segments belong to the same vessel. The manuscript is well written, a pleasure to read, and the methods are clearly explained. Whereas the Deep Learning part seems to be a straightforward utilization of existing techniques, I appreciate the combination with traditional approaches to include more global information in the processing, thus addressing the more local nature of U-net like approaches. The authors apply their method to two existing databases, and demonstrate (a minor) improvement in performance. When assessing the DL part only, the results are in line with other approaches; the results demonstrate that the added post-processing is indeed able to improve the performance, bringing it slightly above other approaches. Some issues could have been addressed. First, from the table I conclude that some approaches have a different performance over the two datasets, whereas the authors' approach seems to perform similarly. I presume this is related to the training data (including images from both sets?). Information on the exact training, and possibly some discussion on this, may further improve the manuscript. Whereas the results seem to improve consistently, the differences are minor.
It would be relevant to 1) check whether these differences are statistically significant (if possible), and 2) discuss these improvements in the context of the clinical applications: how much better off would the patient (or physician) be with the improved classification results? In addition, add the value of all parameters (such as m_A) that were used in the final method. Similarly, if the networks used are not identical to the ones that the authors referred to, please specify the exact configuration (depth, initial nr. of features, drop-out, etc.). A nice, well-written manuscript with an interesting method that combines Deep Learning with a 'traditional' post-processing approach. The method is assessed on two known databases, and performs (slightly) better.""",4,1 midl20_136_1,"""The paper is particularly hard to read and written in poor English. Authors must correct the grammar and spelling mistakes if the paper is accepted. Pros: - challenging and interesting problem: multi-modal registration with deep learning - combining domain adaptation and registration is a novel idea - explores some interesting concepts such as approximating the Earth Mover's distance via a 1D projection Cons: - 2D approach. Seems hard to extend to large 3D volumes - poor English - experiments are unclear. The registration is not directly evaluated. In particular, are the registration outputs well regularised? """,2,0 midl20_136_2,"""This paper describes domain adaptation-based unsupervised learning of medical image registration. Preliminary results show that coarse patch-based displacement classification can be performed well using domain adaptation-based unsupervised learning, and show improvement over traditional methods. The paper is well written and concepts are explained as well as they can be in the limited 3-page space. Pros: The use of deep learning concepts is well justified in the paper. Every decision comes with an explanation, whether it is the type of loss function used, the way weights are updated to prevent over-fitting, or the way predictions are scaled (explained with citation). This is a welcome change from reading several deep learning papers that simply use some network architecture, loss functions, parameters, etc. without explanation. Cons: The accuracy numbers are low for all the reported methods. It would be nice to intuitively understand how the accuracy numbers translate to actual registration errors. This may be hard to compute but the authors could use something like SSD, for instance, to understand what kind of registration errors are produced when the accuracy of the network is ~40%. """,3,0 midl20_136_3,"""The approach views registration as a discretized multilabel classification task, and exploits the maximum classifier discrepancy (MCD) idea from the field of domain adaptation. This makes it possible to exploit annotations on a given modality to train a registration network on other target domains. The main contribution is in the way the discrepancy measure is set, using 1D projections of the 2D histogram of displacement label probabilities to both preserve spatial information and retain ease of computation. The approach is demonstrated on 2D patches. - What is the motivation for domain adaptation? ""Gathering labelled training data for learning-based multimodal registration is very time-consuming and expensive"" True, but the application chosen here doesn't really relate to this point. Also, T1/T2 to illustrate domain adaptation is a bit stretched as this can be performed with standard metrics (NMI, etc.)
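The 1D-projection idea mentioned in the review above (comparing 2D histograms of displacement labels through their projections, so that an Earth Mover's distance stays cheap to compute) can be illustrated with a minimal NumPy sketch. This is only an assumed, generic formulation of the trick, not the reviewed paper's implementation; the function names and the choice of summing the two marginal projections are mine.

```python
import numpy as np

def emd_1d(p, q):
    """Closed-form 1D earth mover's distance between two histograms on the same
    regular grid: the L1 distance between their normalized CDFs."""
    p = p / p.sum()
    q = q / q.sum()
    return np.abs(np.cumsum(p) - np.cumsum(q)).sum()

def projected_emd_2d(hist_a, hist_b):
    """Compare two 2D histograms of (dx, dy) displacement-label probabilities by
    summing the 1D EMD of their two marginal projections; this keeps some of the
    spatial structure of the 2D histogram while remaining cheap to compute."""
    return sum(emd_1d(hist_a.sum(axis=ax), hist_b.sum(axis=ax)) for ax in (0, 1))

# toy usage: two 5x5 histograms over discretized displacements
rng = np.random.default_rng(0)
print(projected_emd_2d(rng.random((5, 5)), rng.random((5, 5))))
```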
- Along the same line of thought, the approach restricts itself to the use of a shared feature CNN between source and target domains, which may limit how different the domain appearances can be; empirical validation in a truly multimodal setting would be useful here (alternatively showing improvements over standard un/supervised registration with multimodal image similarity metrics). - If only one result is reported regarding accuracy, it should be directly in terms of displacement error rather than label accuracy. The latter only relates to the specific choice of formulation and is tough to interpret.""",2,0 midl20_136_4,"""This paper proposes an interesting and still new application of domain adaptation, image registration. The purpose is to adapt a network trained to estimate the displacement on MRI T1 patches to T2 patches. The proposed network has access to pairs of patches in the source domain, with their displacement label. In the target domain, it only has access to displaced patches. The idea to make displacement ""similar"" in the source and target domain is to match the histograms of displacements. To this end, a projected Earth Mover's distance metric is proposed and compared to the Wasserstein distance. The ideas are straightforward. Nonetheless, a more friendly introduction to histogram distances could have been proposed. For example, it could have been beneficial to show histograms, and the resulting Wasserstein distance versus the proposed one. A results table comparing the baseline without adaptation, Wasserstein, and the proposed method could have been produced for clarity. The results/discussion section is limited, so the paper could be better organized. Overall the idea is nice and the application still new.""",3,0 midl20_137_1,"""Summary:- This paper develops a new deep learning model for simultaneous segmentation and survival regression using the Cox method, which is a combination of a 2D U-net and a residual network. Results are evaluated on synthetic data in which the model segments circles of varied sizes. This paper opens up a combined workflow for segmentation and regression in medical imaging. Strengths:- This paper proposed a way to simultaneously segment an object and regress it using a CPH-inspired deep learning model. The paper has potential for survival prediction applications. Weaknesses:- The paper's validation seems to be weak. Synthetic data should be as close as possible to the real data. In general, Gaussian noise is too simplistic an assumption for any realistic data. The introduction is quite weak, with no citations of recent work in survival prediction applications. Major comments:- The paper is very weakly written and considers a very simplistic synthetic dataset. Figure quality needs to be improved. More details about the CPH model are required. What is the batch size being considered during training? Also, please mention inference timing. Results and discussion sections need to be elaborated further. """,1,0 midl20_137_2,"""The authors propose a multi-task architecture to simultaneously perform image segmentation and survival analysis. The authors investigated three different architectures to analyse how the learned representations affect the performance across tasks. An experiment was performed on a synthetic dataset consisting of a randomly located disc in an image corrupted by Gaussian noise. The segmentation task was segmentation of the disc and the survival analysis/regression task was the disc area.
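Several of the comments above and below ask how a CPH (Cox proportional hazards) loss fits a task like regressing disc area. For readers who want to see what such a loss usually looks like, here is a minimal PyTorch sketch of a generic negative Cox partial log-likelihood with censoring (in the spirit of Cox-nnet / Ching et al.); it is an assumed, textbook-style formulation rather than the reviewed paper's code, and the function name and the absence of tie handling are mine.

```python
import torch

def neg_cox_partial_log_likelihood(risk_scores, event_times, event_observed):
    """risk_scores:    (N,) predicted log-risk per sample
    event_times:    (N,) time of event or censoring
    event_observed: (N,) 1 if the event occurred, 0 if censored"""
    order = torch.argsort(event_times, descending=True)   # longest survivors first
    risk = risk_scores[order]
    observed = event_observed[order].float()
    # log of the cumulative risk set: everyone still at risk at each event time
    log_risk_set = torch.logcumsumexp(risk, dim=0)
    # only uncensored samples contribute to the partial likelihood
    return -torch.sum((risk - log_risk_set) * observed) / observed.sum().clamp(min=1)

# toy usage
scores = torch.randn(8, requires_grad=True)
loss = neg_cox_partial_log_likelihood(scores, torch.rand(8), torch.randint(0, 2, (8,)))
loss.backward()
```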
The authors illustrate on this pilot experiment that the SR net architecture performed best, demonstrating that using intermediate representations for regression achieved the best performance. [Strengths] * This is an interesting problem, which is worthy of study. Multi-task learning in itself can be thought of as a representation learning problem and studying how to learn representations from medical images that enable accurate survival analysis is important. * The idea to investigate which representations improve the regression task is interesting [Weaknesses] * For a synthetic dataset, I think the tasks are too simple and easy and it is difficult to draw any conclusions from this. The regression task is directly correlated to the segmentation task and could just be evaluated directly from the segmentation. * Generally in multi-task problems, the performance of single-task networks is also included to showcase the benefit of jointly learning tasks. How good was the regression on its own? * Why is the Cox-nnet loss function even needed in this scenario? Standard regression loss functions should have been compared to the partial log-likelihood of Ching et al. * It is disappointing that there were not any experiments on medical data despite being a pilot study. Furthermore, the methodology is not novel enough to warrant such a simplistic synthetic experiment. As far as I can see, the novelty lies only in using a segmentation task in a multi-task setting to help learn representations for survival analysis. [Suggestions/Further questions] * Performance metrics on the training set (Table 1) are not needed * Being a synthetic experiment with infinite data, was the training set divided into train/test? * What does probability of survival have to do with circle area? * The regression task was circle area. How can this be used as a substitute for risk prediction?""",1,0 midl20_137_3,"""The authors proposed a multi-task learning approach for simultaneous segmentation and survival analysis. The authors compared three architectures and validated that SR-Net has the best performance. Overall, it is a very interesting work as so far not much work has investigated how segmentation and survival analysis can be trained together. 1. It is a pity that the authors only conducted experiments on the synthetic data. The proposed model has a very good potential use on real CT data for both segmentation and survival prediction. I am very curious about how the model works in practice. 2. It is not clear how the partial log-likelihood regression loss is used in this study. As we know, Cox Proportional Hazards Regression is designed to handle time-to-event predictions with censoring. However, in this study, the loss is used to regress the area of each circle. Why not use another regression loss like RMSE? How did you assign censoring labels to your samples? """,2,0 midl20_137_4,"""the paper is well-structured and easy to follow, still I think important details are missing: - the motivation for using a CPH model instead of other modelling of non-linear variable interactions, - evaluation of more realistic cases, as the synthetic task is perhaps too simple - statistical significance tests via cross validation - some attention-based module to show that lesion localisation really helps the regression""",2,0 midl20_138_1,"""This paper addresses the problem of developing a deep-learning reconstruction method that is flexible enough to handle multiple acquisition contexts.
This is achieved using a reconstruction module and a dynamic weight prediction module. Results are demonstrated in several different reconstruction settings. Using a reconstruction module together with a dynamic weight prediction module is creative and novel. This paper addresses an issue with very high practical relevance. There are many imaging situations where the acquisition context is novel. While classical reconstruction methods still work well in these situations, modern deep-learning methods generally do not. The paper uses unrealistic simulations but does not discuss the limitations of these simulations. This is an important problem to remedy, because unrealistic simulations generally have different performance characteristics than real data. It can cause a lot of confusion and set a bad example for future research if these issues are not addressed. The problems with the simulations include: (i) the paper simulates k-space data by taking the Fourier transform of magnitude images. Real images have phase, but these simulated images will not. This means that the simulated data will have perfect symmetry and is much easier to reconstruct than real data would be. (ii) The simulations do not include parallel imaging, which is the standard modern approach. (iii) There are many real k-space datasets available; I don't know why the authors didn't use some of these instead of performing unrealistic simulations. If these issues are not fixed, they at least need to be listed as limitations and readers need to be properly warned about the interpretation of the results. This is interesting and creative work. There are some limitations, but these are fine as long as they are properly disclosed. I think this paper is very well done, and is worthy of some recognition for that.""",4,1 midl20_138_2,"""The authors present a dynamic weight prediction module that allows reconstruction models to generalize better. This is done by conditioning the weight prediction module on the acquisition context vector during training. This means the weights are modulated by the context, which improves generalisability to unseen contexts. The objective is well motivated and described. Generalization is very important in medical imaging and there is a clear need to handle out-of-distribution samples. The method is detailed to an appropriate level for understanding and implementation. The experimental results are extensive, detailed and cover a range of the parameter space on two appropriate datasets. The results look convincing and mostly well presented. The results could be presented better. Tables 1, 2 and 3 have a very small font and the relationships between parameters and output are hard to gauge from reading. It would be greatly improved by visualizing these relationships in plots. The results are impressive but would be improved with further analysis to check the significance of the improvement against baseline methods. I believe this is a proper contribution to the field of generalizability of medical image reconstruction. The method is based on dynamic weight prediction that modulates the parameters of the reconstruction network by conditioning them on a given context encoding. The results are consistent and comparable to or better than related work. The drawbacks are mainly to do with presentation (tables are small and hard to gauge). And the impact would be improved by checking differences to baselines for significance.
""",3,1 midl20_138_3,"""This paper proposed MRI reconstruction framework that is flexible to multiple acquisition context and generalizable in real scenarios. To this end, the authors proposed reconstruction module and dynamic weight prediction module which takes acquisition context vector as input. Experimental results support the effectiveness of the proposed method. The authors encoded the combination of input settings (anatomy, undersampling pattern, and acceleration factors) as acquisition context vector, and used it for flexible MRI reconstruction as a single network. This is important since the context-specific model demands lots of computational burden in practice. - Lack of details of how to combine DWP block and CNN block in section 3.1. The notations are very confusing since the network weights W in eq (1) is the weights of CNN block, which is in general independent of acquisition context. Then, the authors used the same W for the output of DWP block. Also, the authors are missing period or comma in many places. - Need more detailed information about dataset such as original dimensions of cardiac data. How are the images cropped? - Need comparison of model complexity with joint context model. How much is it increased due to DWP module? The method was not fully explained. The authors proposed dynamic weight prediction block, but it is hard to understand how to encode the acquisition context and how to combine this information with CNN module.""",2,1 midl20_138_4,"""The authors propose a framework to utilize one model under different acquisition context scenarios. A novel dynamic weight prediction model is proposed to learn to predict the kernel weights for each convolution based on different context settings. Experiments show that the proposed method outperforms the model trained on the context-agnostic setting and acquires similar results to models trained by context-specific settings. 1). - The idea of learning convolution weights for different input image quality is novel. 2). - The method part is well-written and easy to understand. 3). - It conducts extensive experiments for three different settings and the results demonstrate the effectiveness of the proposed method. 1). - Opposite to the Method part, it's hard to read the abstract and introduction. Some typo problems lie here. 2). - It seems that the DWP need to generate a specific weight each time. The authors do not compare the inference speed of the proposed method with others. 3). - In Table 3., the result of the proposed method is slightly higher than the CSM. There can be more discussion here. The authors propose a framework to utilize one model under different acquisition context scenarios. The method is novel with extensive experiments. Results show the effectiveness of the proposed method. But the writing needs to be improved. Therefore I recommend the weak accept. """,3,1 midl20_139_1,"""The authors propose using separately computed density maps as manner of attention in segmenting white matter lesions in cranial ultra sound images. The application of UNet to cranial ultrasound is very interesting. The performance of the proposed method improves in terms of dice, however, the sensitivity is significantly lower compared to a vanilla Unet. Pros: 1. Problem statement is incredibly hard. The results are good considering the difficulty. 2. The paper is an easy read. 3. Density estimation is potentially a good idea. However, the manner of computing is the density maps needs explanation. Are the images registered? 4. 
The combination of balanced cross-entropy and balanced focal loss is less used in the community. The results are encouraging. Cons: 1. A comparison to a self-attention mechanism is missing since that is the main change in the paper. 2. Severe dip in the sensitivity. This could be due to the density maps being multiplied with the feature maps, so lesions in less likely locations actually get very low attention. This is probably not ideal. 3. More explanations of parameter choices such as gamma and beta are necessary. The paper does not propose any novel methodology or empirical insight. However, the application is quite novel. This alone does not warrant an acceptance. The results are also not convincing considering vanilla UNets provide better sensitivity, indicating the proposed method is not adequate at picking up smaller lesions. """,2,1 midl20_139_2,"""This paper proposes a novel deep network, priority U-Net, based on the UNet architecture to perform semantic detection and segmentation of PWML disease on 3D cranial ultrasound images from 21 preterm babies. The performance of the proposed method was compared with the U-Net on a dataset consisting of 547 images. Recall and precision improved in the detection task and the Dice metric also increased in the segmentation task. The paper is well-written with an appropriate introduction. The size of the dataset (21 babies) is relatively good. Results are promising, with superior performance compared to the previous work, and there is good visualization of the method/results. There are some vague parts, explained below, to be addressed in the rebuttal and paper. There are some arbitrary parameters whose rationale I miss. For example, how did the authors determine the level at which the attention maps should be fed in ... The paper is well written, the results are promising and better performance was achieved compared to the previous work on this problem. The references are up to date, and the results were visualized with various figures which provide the reader with faster understanding. """,4,1 midl20_139_3,"""Authors present their work on detecting large PWM lesions in 3D cranial US images. They have implemented a U-net for this and introduce prior density maps into this network, to improve overall detection performance. Experiments compare the original vs the modified U-net with different loss functions. The best method reaches a sensitivity of 50% with good precision. This appears to be the first work on PWM lesion detection in 3D cranial ultrasonography. This application is important and the work contributes well to this problem. The gold standard is MRI, which is not always available and hence cUS is used. The inclusion of prior maps into the u-net is a good addition. The main weakness of this work is the selection of lesions that are included in the evaluation and results. While the median lesion size is 0.413 mm3, authors only include lesions larger than 1.700 mm3. Figure 3 is not very clear on this, but this is obviously a small fraction of the lesions. This raises two questions: (1) is the problem still relevant if only large lesions can be found and (2) is an automated method needed to find large lesions, since they are very easy to find visually owing to their large size. The application is very original and well chosen by the authors. The methodological contribution is not that large, but including priors into a method is a nice idea and this is an ok solution.
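The reviews above describe prior density maps being multiplied with feature maps and worry that atypically located lesions then receive almost no attention. A minimal PyTorch sketch of that kind of multiplicative prior attention (plus a softened variant that keeps low-prior regions reachable) is shown below; this is an assumed, generic formulation for illustration, not the priority U-Net's actual code, and the function names and the floor parameter are mine.

```python
import torch
import torch.nn.functional as F

def multiplicative_prior_attention(features, prior_density):
    """features:      (B, C, H, W) activations at some network level
    prior_density: (B, 1, h, w) precomputed lesion-density prior in [0, 1]
    Plain multiplication silences regions where lesions are rare, which is the
    behaviour the review flags as a possible cause of the sensitivity drop."""
    prior = F.interpolate(prior_density, size=features.shape[-2:],
                          mode='bilinear', align_corners=False)
    return features * prior

def soft_prior_attention(features, prior_density, floor=0.1):
    """Same idea, but with a floor so unlikely locations are attenuated rather
    than zeroed out."""
    prior = F.interpolate(prior_density, size=features.shape[-2:],
                          mode='bilinear', align_corners=False)
    return features * (floor + (1.0 - floor) * prior)
```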
The performance of the method is moderate, but since this is a very hard task and one of the first methods on this, it deserves to be presented/published and will likely inspire future works that will improve upon this.""",3,1 midl20_140_1,"""This paper introduces a method to learn a system based on convolutional networks for classification of whole-slide images. The approach assumes that parts of the whole-slide image can be grouped based on unsupervised cluster analysis, and therefore only representative patches close to cluster centroids are involved during training, making the system trainable end-to-end. The method is validated on two applications and three datasets: cancer detection in prostate and basal cell carcinoma, and lung adenocarcinoma growth pattern classification. Results on the first application are close to a recently presented state of the art system on the same test set; no comparison on the second application is presented. * valid alternative to existing solutions for weakly supervised learning in histopathology image classification * method trained and validated on fairly large datasets * efficient use of computational resources by using cluster analysis * results close to the state of the art for the detection method, but slightly worse (compared to MIL-RNN). Authors should explain what the added value of using the proposed method in this application is. If its strength is in computational efficiency, then a specific comparison in this direction should be presented. * Authors state that this method could be used to predict survival but no experiment is reported. The title of the paper is ""beyond classification"", but in the end only classification is shown. * Figure 1 is quite confusing; despite the description in the caption, it is quite hard to follow the workflow of the method. What is the training set? What is the test set? Where is the training of model parameters happening? Consider rearranging the order of the components. * From Figure 1, it seems that clusters are made using patches from the full dataset, and later patches from the training set are mentioned. Does this mean that clusters are defined using patches from images of the validation and the test set as well? This should be clarified. * It is not clear what the contribution of Figure 5 is, as colors are not explained, and the tissue under colored patches is difficult to recognize. This paper introduces a method to train CNNs end-to-end for whole-slide image classification. It presents an approach that is novel compared to existing work in the field and therefore represents an important contribution. The paper is well written and relatively easy to follow. Results are in line with the state of the art. More datasets should be analyzed, also aiming at predicting survival.""",4,1 midl20_140_2,"""This paper presents an end-to-end part learning (EPL) approach as an alternative to conventional two-stage approaches (tile encoder and tile aggregation), generally used for analyzing histopathology whole-slide images (WSIs). Each WSI is clustered into k groups, and the proposed model jointly learns the class label for image patches and the global cluster centroids. This study is performed on 3 datasets, including prostate cancer, basal cell carcinoma, and lung cancer. The lung cancer dataset contains multi-class labels and the other two datasets are treated as binary classification problems. Overall, the manuscript is well-written and addresses a relevant problem by proposing an interesting end-to-end part learning method.
The performance of the model is validated on 3 datasets from different indications. Below are some minor/major comments - Table 1 lacks a comparative analysis (especially for the multi-class problem) between conventional two-stage classification approaches and the proposed approach, which makes it difficult to quantify how well the proposed approach is performing. The performance with an existing MIL approach is comparable (nearly the same). - The proposed approach is heavily dependent on some of the hyper-parameters, for instance k; there is no empirical evidence of how one should select k or how the performance of the model will vary by changing k. Besides, the value of k is different for different datasets. Another parameter is p for the centroid approximation; again, no ablation experiments are performed to validate the robustness of the selected value for p. - In section 3.3, it is stated that the whole training data were randomly split into k groups. Is this the best way to split data into k clusters (centroid initialization methods)? - During inference, is the proposed approach less computationally expensive than MIL or conventional two-stage approaches? It would be worth reporting the run-time (e.g. in seconds) or computational complexity or number of learnable parameters. - In Table 1, in the last row where k=1, the model performance on the BCC dataset is 0.93; if during inference only k tiles are fed through the model to output the slide prediction, then it would be of interest to show the one patch that classifies the WSI label. - In the mathematical notation, some of the variables are not defined properly, like N, and the relationship between a slide S and X and xi is not clear. Some very minor comments, - The following sentence needs a revision: binary cross entropy after sigmoid activation was used for multi-label prediction. - Title of the paper: Classfication -> Classification. - In out experiments -> In our experiments. The paper presents an interesting end-to-end part learning approach using 3 datasets, but some of the key comparative and ablation experiments are missing. There are some inconsistencies in the mathematical notation.""",3,1 midl20_140_3,"""This paper demonstrates a novel deep learning architecture for whole slide images (WSIs). To reduce the computational burden of processing WSIs, all tiles from the WSI are encoded into the feature space and clustered into groups. The model weights are then learned for each cluster. At inference, a single tile from each group is used to make a decision. This greatly reduces the number of tiles that need to be processed to render a decision. This architecture is compared against a well-performing existing model and found to have similar performance. The novel architecture described in this manuscript seems to be based on solid mathematical and machine learning foundations, and is adequately described here. The use of an external, publicly available dataset is a strength. The grouping of tiles into clusters lends some interpretability to the model, as shown in Figures 3, 4, and 5. The need for this model is unclear. WSIs are difficult to process, but this model appears to perform exactly as well as the MIL model it is compared to and worse than MIL-RNN, weakening the motivation for adoption of this new architecture. Redesigning Figure 1, describing the model, would make it easier to understand this paper. This paper has no glaring flaws and there is a need to develop deep learning architectures specific to digital pathology.
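The clustering step that both reviews describe (encode all tiles of a slide, group the embeddings into k clusters, and forward only one representative tile per cluster) can be sketched offline with NumPy and scikit-learn as below. In the actual EPL method the centroids are learned jointly with the classifier, so this is only an assumed, simplified approximation of the "tile closest to each centroid" idea; the function name and the use of plain k-means are mine.

```python
import numpy as np
from sklearn.cluster import KMeans

def representative_tiles(tile_embeddings, k=8, seed=0):
    """tile_embeddings: (num_tiles, dim) encoded tiles of one slide.
    Returns the indices of the k tiles closest to the k cluster centroids,
    i.e. the only tiles that would be forwarded for the slide-level decision."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(tile_embeddings)
    reps = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(tile_embeddings[members] - km.cluster_centers_[c], axis=1)
        reps.append(members[np.argmin(dists)])
    return np.array(reps)

# toy usage: 500 tiles with 128-d embeddings
print(representative_tiles(np.random.rand(500, 128), k=8))
```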
The model design is reasonable and suited to the clinical problem. However, there does not seem to be a compelling reason to adopt this architecture over other schemes.""",3,1 midl20_140_4,"""The presented work is interesting, innovative and well described. The authors proposed a two-stage method for WSI classification that is able to learn diverse features from tiles. The authors performed experiments on large datasets (thousands of slides used for training) for three independent tasks. Both the developed methods and the presented results are interesting for the scientific community. Strengths: - experiments on large datasets (thousands of slides used for training) for three independent tasks - innovative method - the method is well described - results for prostate cancer and basal cell carcinoma were compared with results available in the literature The paper has one significant issue, namely the acinar pattern in lung classification that disappeared. Adenocarcinoma has five main patterns: solid, micropapillary, papillary, lepidic and acinar. In figure 4 the acinar pattern is presented in row 6. However, in the description and results, the acinar pattern is not mentioned. As a result, it is unclear whether acinar was a subtype that occurred in the training/test dataset and was detected, and why this pattern is not evaluated in the table. It looks as if acinar was part of the training dataset but was excluded from the evaluation. Why? It is also not clear where the lung cancer dataset of 599 WSIs came from (a single or many medical centers)? The authors presented a clear goal and an innovative method. The proposed solution was developed based on large datasets and evaluated on three independent tasks. The main issue that should be explained is the disappearing acinar pattern.""",4,1 midl20_141_1,"""The authors propose some metrics based on thresholded uncertainty to evaluate the reliability of uncertainty estimation methods for deep learning-based segmentation, which is of interest to the community. The effect of these metrics on brain tumor segmentation has been shown. However, the proposed metrics failed to rank different uncertainty estimation methods, as shown in the results. pros: 1, considering the ratio of filtered TPs and TNs is a reasonable idea for uncertainty assessment. 2, the authors showed some results with a brain tumor segmentation task, which helped to understand the proposed metrics. cons: 1, using Dice based on thresholded uncertainty to evaluate the uncertainty estimation method has been proposed before, such as in the following paper: [1] Assessing Reliability and Challenges of Uncertainty Estimations for Medical Image Segmentation, MICCAI 2019. Authors in [1] found that based on such a metric, model ensembles had a better performance than other uncertainty estimation methods. But this paper found that there was no obvious winner among different uncertainty estimation methods according to the metrics used in this paper. Could the authors explain more about this? 2, following the above problem, the results didn't show the proposed metrics have the ability to distinguish good and poor uncertainty estimation methods. How can the effectiveness of the proposed metrics be validated? """,3,0 midl20_141_2,"""The paper presents an evaluation of recently developed uncertainty measures on Brain Tumour Segmentation. Pros: The paper is well-written and relevant to MIDL topics. Further, it introduces two additional metrics to evaluate the performance of uncertainty on a publicly available database.
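The metric family discussed in these reviews (Dice computed after filtering out high-uncertainty voxels, together with the ratio of true positives / true negatives that the filtering throws away) can be written down in a few lines of NumPy. The sketch below is my reading of that family for a single threshold, not the exact definitions used in the reviewed papers; the function name and the safeguard constants are mine.

```python
import numpy as np

def thresholded_uncertainty_metrics(pred, gt, uncertainty, tau):
    """pred, gt:    binary segmentation and ground truth (same shape)
    uncertainty: voxel-wise uncertainty estimate
    tau:         uncertainty threshold; voxels above tau are filtered out
    Returns the Dice over the retained (confident) voxels and the fraction of
    TPs / TNs that the threshold removes (which a good metric should penalize)."""
    keep = uncertainty <= tau
    tp = (pred == 1) & (gt == 1)
    tn = (pred == 0) & (gt == 0)
    p, g = pred[keep], gt[keep]
    dice = 2.0 * np.sum((p == 1) & (g == 1)) / max(np.sum(p == 1) + np.sum(g == 1), 1)
    filtered_tp_ratio = np.sum(tp & ~keep) / max(np.sum(tp), 1)
    filtered_tn_ratio = np.sum(tn & ~keep) / max(np.sum(tn), 1)
    return dice, filtered_tp_ratio, filtered_tn_ratio
```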
Cons: Calibration wasn't performed or discussed here. The paper would have been even stronger if a quantitative assessment against label uncertainty, due to intra/inter-observer variability, was performed. Detailed Feedback: - As you might know, predictive uncertainty is underestimated, and calibration has been recently investigated in this context, e.g. Guo et al. [1]. Some methods claimed better calibration, e.g. Deep Ensemble. So I was wondering whether the reported uncertainty methods were well-calibrated on a validation set or not. It would have been better if the methods were well-calibrated first before running the evaluation, or at least if the authors had discussed this point in the discussion and conclusion. - One of the concluding remarks that I was hoping to see is the need for novel techniques and tools that measure label uncertainty, similar to the work of Tomczack et al. [2]. I think this is extremely important as we need to urge researchers to look at this. [1] Guo, C., Pleiss, G., Sun, Y. and Weinberger, K.Q., 2017, August. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70 (pp. 1321-1330). JMLR.org. [2] Tomczack, A., Navab, N. and Albarqouni, S., 2019. Learn to estimate labels uncertainty for quality assurance. arXiv preprint arXiv:1909.08058. """,3,0 midl20_141_3,"""- How do these authors envision this approach will be used in clinical practice? How will a radiologist interact with such system outputs that provide uncertainty estimates? - Please provide more information on the modified 3D UNET utilized. Did any of the parameters change during experimentation? - Additional details with respect to the experimentation must be added. - Additional examples capturing the effectiveness of this metric must be provided. """,3,0 midl20_141_4,"""Quality and clarity: - The short paper is well-written and easy to follow. Significance: - The evaluation of uncertainty estimations in segmentation is crucial. Given the number of existing uncertainty estimation methods, such metrics are critical to compare the produced uncertainty estimates quantitatively. Pros: - The work addresses an important problem. - The proposed metric not only rewards uncertainties in the FP and FN but also penalizes the uncertainties in the TP and TN regions. - Figure 1 and Table 1 greatly improve the understanding of the proposed metric. Cons: - The proposed metric is rather complicated to interpret since it consists of three sub-metrics and requires different thresholds. - The work neither describes how to combine the three sub-metrics, nor explains how to combine the values at each threshold. Being able to summarize the metric into one scalar value would be beneficial for broader adoption and better interpretation. - The compared uncertainty estimation methods are insufficiently described or cited. Minor: - Typo in Table 1: The TP in the definition of the FTN should probably be a TN. - The work mentions inter-rater variability as ground truth uncertainty. It is arguable if the desired uncertainty of a model should be similar/identical to the inter-rater disagreement.""",3,0 midl20_142_1,"""This method utilizes a discriminator to calculate the segmentation loss values. The generator takes the image as input and outputs a segmentation result. Then the image with the ground truth and the image with the segmentation result go through the discriminator, which utilizes a perceptual loss to evaluate the segmentation results.
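The mechanism described in this review, and detailed further in the reviews that follow (the image is masked/multiplied by the predicted and the ground-truth segmentation, both masked images are passed through a frozen ImageNet-pretrained VGG19, and the squared distance between the two feature maps is used as the training signal for the generator), can be sketched in PyTorch as below. This is a minimal illustrative version assuming a cut after the second max-pooling layer, not the reviewed paper's exact implementation; the class name and the layer index are mine.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights

class MaskedContentLoss(nn.Module):
    """Siamese-style content loss: mask the image with the predicted and the
    ground-truth segmentation, push both through a frozen VGG19 feature
    extractor, and penalize the squared distance between the two embeddings."""
    def __init__(self, cut_layer=10):                       # through the 2nd max-pool
        super().__init__()
        self.features = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:cut_layer].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, image, pred_mask, gt_mask):
        # image: (B, 3, H, W); masks: (B, 1, H, W) in [0, 1]
        f_pred = self.features(image * pred_mask)
        f_gt = self.features(image * gt_mask)
        return torch.mean((f_pred - f_gt) ** 2)
```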
The method is not novel, but it is interesting to do some experiments with this kind of setting. 1 The motivation for the proposed method is good. The design of this model is clear. It utilizes the GAN-based idea to deal with the imbalance problem. 2 The diagrams look good and help the reader to follow. 1. More comparisons with state-of-the-art methods should be added to demonstrate the effectiveness of the proposed method. The corresponding visualization results should also be added. 2. More loss functions should be explored. The authors just utilized the L2 loss. 1 The methods are OK and the novelties are not significant. 2 The experimental part is not comprehensive. More comparisons with state-of-the-art methods, both quantitative and qualitative, should be added.""",3,1 midl20_142_2,"""This paper presented an image segmentation framework in a generative model using an auxiliary Siamese-network-trained discriminator as the loss function, which produces improved segmentations and more generalizability when compared to segmentation using the same generator but only with a Dice-based loss function. - The main message from the paper is the proposal of a novel loss function using content loss, which is generated from an auxiliary Siamese network taking into account the raw image, the ground truth segmentation, and the prediction. - Transfer learning is used to initialize the segmentation network of the Siamese network with weights trained on ImageNet. - Flattened feature maps were used to calculate the loss for the discriminator. The reasons behind the decisions for each point in the architecture design and parameters are explained clearly, such as: in the generator, the choice of 2D convolution over 3D convolution, the choice of kernel size (3x3), the number of dense-block operations in the encoder (using grid search), and the choice of VGG19 in the discriminator. The ground truth is soft-binarized to avoid gradient explosion. The volume dimensions, on top of the class-specific voxel number, are taken into consideration when measuring class imbalance to compute the validation metric. Many hyper-parameters are properly tuned, such as the number of dense-block operations used in the encoder and the number of feature mappings used at the max pooling layers. Method comparison: My biggest criticism is of the method comparison: only compared to the raw Dice coefficient? I would argue it would be fairer to compare the proposed loss function with other state-of-the-art loss functions mentioned in the introduction, i.e. Milletari et al. 2016, Fidon et al. 2017, and Sudre et al. 2017. In particular, the proposed method still seems to suffer from large class imbalance, although less severely than the pure Dice loss (Figures 5-6). Following up on that, another of my criticisms is: on page 8, results section, the authors discussed that the ""direct correlation between positive class density and segmentation performance"" is ""likely attributes to CNNs in general, as through kernel operations and max pooling, ne spatial information is loss in the process."" However, the Dice coefficient is known to be correlated with the volume of the segmented mask. The ""positive class density"" on the axis is, by definition, correlated with the volume of the segmented mask of the positive class. Without correcting for that correlation, the Dice will naturally be higher for larger segmented volumes. Therefore, this conclusion might be misleading.
Actually, this is indeed the reason that other methods cited by the authors try to adjust the raw Dice before using it as a loss function. (The fact that in the baseline method, even though Dice itself is used as the loss function, the evaluation using the same Dice metric showed unsatisfying results, indicates that the raw Dice is not a good candidate for a loss function.) In the discriminator: a multiplication is used to feed the critic. The authors stated that ""*Intention of multiplying the masks and input image together is to highlight the regions of interest and to allow network to jointly evaluate information from both images.*"" However, would this approach cause the critic to treat over-segmentation and under-segmentation differently (e.g. over-segmentation gets less penalty than under-segmentation, as less information is lost)? Rather than multiplication, how about instead: adding the mask as a new channel (similar to the idea of a dense block) to preserve full information from both the raw image and the mask, or using the mask as a weight to guide the critic to focus more on the masked region and less on the non-masked region (the weight itself can also be a learnable parameter)? It seems the discriminator is not further trained in a generative adversarial (minmax) fashion; why not? Instead, pre-trained weights from ImageNet are directly used. This seriously limits the feature maps that can be used, as well as the flexibility of input channels that can be used (e.g. using 3 consecutive slices as I proposed in the previous section). Preprocessing: three copies of the input were concatenated to generate pseudo-RGB images to utilize the transfer learning. That seems to be a bit of an under-utilization of the resources. Would it make more sense to instead extract 3 consecutive slices and concatenate them together in order to take advantage of the information in the adjacent neighbouring slices? The paper proposed a good combination of good practices into a newly proposed loss function. The paper did a good job explaining the thinking behind the experimental design and implementation in great detail. The method comparison part of the paper needs to compare against other recently proposed state-of-the-art loss functions that the authors cited in their introduction. The method evaluation in Figure 6 might be misleading. The choice of combining the information in the mask (both ground truth and predicted) with the raw image information (which is a simple multiplication) needs to be adjusted and, if possible, improved""",2,1 midl20_142_3,"""This paper presents a deep segmentation method of white matter hyperintensities (WMHs) in MRI. The architecture is based on a standard UNet model followed by a Siamese architecture that takes as input 1) the concatenated MR image and predicted segmentation mask and 2) the concatenated MR image and ground truth segmentation mask. The squared Euclidean distance between features extracted from the paired inputs of this Siamese network is used as a loss term to update the weights of the UNet architecture. The authors compare the performance of their architecture to that achieved with a standard UNet trained on Dice loss. Evaluation is performed on FLAIR images of the MICCAI 2017 Grand Challenge and from the Canadian atherosclerosis imaging network (CAIN). This shows that the proposed architecture outperforms the standard UNet architecture. -The general idea of improving segmentation tasks of small structures such as WMHs is of high interest.
-The paper is well written and the state of the art is clearly exposed. -The main contribution of this paper, which consists in using an auxiliary loss term to train a UNet architecture, is interesting. In this paper this loss term is derived from a distance between FLAIR images masked by the reference and predicted binary WMH maps. -The description of this methodological contribution should be more detailed. The authors should indeed explain how they backpropagate this Siamese loss term in the UNet architecture. The green arrow in Figure 1 is not explicit. -The authors mention in the state of the art section that alternative loss terms to the Dice one have been proposed to better account for class imbalance issues (e.g. Sudre et al.). It would be interesting to compare the proposed model with these architectures. -The authors use a model replicating some part of the Siamese architecture; however, they do not try, if I understand well, to train this Siamese model using correlated and uncorrelated pairs of images. Weights of the VGG19 core part of the Siamese model are those adjusted on ImageNet without any fine tuning on the MRI dataset. The proposed ancillary loss is interesting; however, I am not convinced that the authors implemented it in the most efficient way (no training of the discriminator weights, separate training of the generator and discriminator parts). The paper lacks some methodological description to confirm the soundness of the proposed implementation. The authors also do not provide a comparison with the results of the MICCAI 2017 grand challenge; this would add strength to the proposed method.""",2,1 midl20_142_4,"""The authors proposed a new architecture for the segmentation of white matter hyperintensities (WMHs) in MRI data. WMHs are small relative to the whole acquired volume. This leads to imbalance when training a CNN to segment WMHs against other parts. To overcome this imbalance problem and improve segmentation accuracy, the authors introduce a Siamese content loss into a U-Net-like architecture. The new architecture consists of a generator and a discriminator. The generator, which is the U-Net-like architecture, predicts the segmentation label. The discriminator, which is a Siamese net, outputs an L2 loss as the Siamese content loss. The two inputs to the Siamese net are images masked by the ground truth and the predicted label, respectively. From these input images, the Siamese net computes feature vectors and evaluates the Euclidean distance between them as the loss. This loss is used for backpropagation in the training of the U-Net. The method is not technically sound. For the extraction of feature vectors from an input image in the Siamese net, the authors used the feature maps obtained from the first two max pooling layers of the VGG net, which is part of the Siamese net. It is known that shallow layers extract only low-level features such as edges, blobs, and corners (ECCV paper, pseudo-url). The feature vectors extracted by the Siamese net might represent these low-level geometrical features of the masked images. In addition to this property, WMH regions look less textured in Figure 4. Therefore, the proposed loss function can be interpreted as just an edge- and contour-aware loss. This kind of loss function has already been reported in more direct forms, such as the ones below and other methods. The architecture is complex, but I think the computation can be done in a simpler way.
Boundary loss for highly unbalanced segmentation: pseudo-url Loss functions for image segmentation: pseudo-url Furthermore, for the evaluation of the proposed method, the authors trained a U-Net with the Dice loss. Why they adopted the simple Dice loss for an imbalanced problem is unclear. For imbalanced problems, a weighted Dice loss is commonly used. I think the selection of the method for the comparison is unfair. Therefore, evaluation with this comparison does not make sense. As I commented in the weaknesses, the proposed loss function is not technically sound. In particular, the justification for adopting a Siamese net is unclear. If there is a convincing reason for this choice, please describe it theoretically or experimentally. Moreover, the experimental evaluations look like unfair validations. """,1,1 midl20_143_1,"""-In the methods section, the authors claim to describe their model. However, all we have is the description of a U-Net. Was any modification made to the U-Net? - In the results section, it is not clear why the authors chose only one fold. Furthermore, it is unclear by how much the results were improved by the proposed method - In the conclusion it is said '... augmented inference *may* dramatically improve...', does that mean that it sometimes works and sometimes does not? Please be more clear.""",2,0 midl20_143_2,"""Contributions: The authors propose to do ensemble learning (1) to further improve the Dice score, as well as ""accumulating"" the predictions of a single model over test-time augmentation (2) to improve outlier performance. The data used came from the CAMUS dataset, and the model is a U-Net, the same architecture used in the original CAMUS paper. Method: For contribution 1, the authors split the patients into 10 folds, kept two as the testing set, and trained eight separate models on the remaining folds, keeping a different separate fold as the validation set for each of the models. A different model was trained for the two views available for each patient, totaling 16 models trained. For contribution 2, the authors ""accumulate"" the predictions of a single model trained over a single fold by augmenting a test image 200 times via a combination of intensity modification, rotation and Gaussian noise. Results: For contribution 1, a box plot of the Dice distribution is reported over different structures, separated by view and phase. The results are shown for a single model against the proposed ensemble of models. For contribution 2, the Dice score improvement for the accumulated result is reported for a single test image, as well as a qualitative assessment of the segmentation for the same image. Criticism: Ensemble learning is generally recognized as an easy way to improve results on virtually any task. However, it is not a cheap method and requires n times the amount of memory and training time. In itself, the reviewer feels it cannot be considered an improvement of a method. In this particular case, as figure 1 shows, the ensembling can hardly be justified as the improvements shown via box plot seem to be marginal, at a cost of 8 times the amount of memory. Contribution 2 seems to have significant improvements over the baseline, the authors' own U-Net trained on a single fold. However, test-time augmentation is another commonly used practice and the reviewer also feels it is not a novel idea in itself. Furthermore, it is unclear what ""accumulating"" means, whether it is taking the overlap of the 200 predictions of the noisy image, a threshold per pixel, or any other method.
Finally, the reported results are vague and only from a single hand-picked outlier test image. Nothing can be confidently inferred from this result. While the two contributions are orthogonal, no results are reported on the application of the two contributions at the same time. Finally, only the Dice score is reported, while the original CAMUS paper also reported the Hausdorff distance and mean absolute distance. Conclusion: The paper does not present any novel idea for cardiac segmentation. Even though the presented article is a short paper, the article glosses over important details and fails to present meaningful results.""",1,0 midl20_143_3,"""- The paper is very clearly written and the methods clearly described. The method involves ensembling 8 U-net models, trained on different overlapping folds of the echocardiography data with on-the-fly augmentations, and then applying test-time augmentation by introducing 200 rotation variations and averaging the (unrotated) predictions. - The data is split into 10 folds initially, where 2 are held out as test data. The 8 U-net models are trained on 7/8 remaining folds in rotation (with the remaining 1/8 held out for validation on each of these splits). The ensembled prediction is compared to a baseline U-net trained only on a single fold. This however is not a fair comparison, as the ensemble ultimately sees all the data from the 8 folds across the 8 trained models, so the baseline effectively learns from 12.5% fewer real training images. Nonetheless, it is well established that ensembling improves over single models, as also demonstrated in the paper. - Test-time augmentation improves segmentation results compared to the baseline model too. It is unclear whether test-time augmentation improves over the ensemble model without test-time augmentation however. - Both ensembling and test-time augmentation are well established approaches in the literature. There is limited novelty in the proposed work, although clear improvements over a U-net baseline are shown.""",2,0 midl20_143_4,"""The paper proposes to improve results of echocardiography imagery segmentation using model averaging and augmented inference. These ideas are not particularly novel, but have proven to be valuable in multiple recent studies. In particular, the authors claim that averaging the predictions from multiple models improves performance and avoids the spectacular failures the single model prediction may sometimes exhibit. Additionally, data augmentation at test time also improves the results, making them more stable. The authors have trained and evaluated their method on data from the CAMUS dataset. This dataset is pretty large and the data variability observed there is sufficient to evaluate the generalisation capabilities of the method proposed by the authors. Unfortunately, I find that the evaluation is not complete. First of all, the authors only compare a randomly picked model from their 8-fold cross-validation strategy with the average over the 8 folds. It would be interesting to see how a single model performs compared to an average of 2, 3, 4,..., 8 models. More importantly, it would be very good to see how the average of different architectures would work. Additionally, the authors seem to state that test-time augmentation has only been done on one example, which is the one used for qualitative analysis and that is reported in the figure. It would have been really great to see a formal comparison of the performance with and without test-time augmentation for the whole test set.
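The ""accumulation"" that these reviews discuss (predicting on many augmented copies of a test image and combining the predictions back in the original frame) is ambiguous in the reviewed work, as one reviewer notes. A minimal PyTorch sketch of the simplest reading, i.e. averaging the softmax maps after undoing the rotation, is given below; the augmentation ranges, the noise model and the function name are assumptions for illustration, not the reviewed work's settings.

```python
import torch
import torchvision.transforms.functional as TF

def tta_predict(model, image, n_aug=200, max_angle=10.0, noise_std=0.01):
    """Average softmax predictions over randomly rotated / noised copies of one
    test image, mapping each prediction back to the original frame.
    image: (1, C, H, W) tensor"""
    model.eval()
    accum = None
    with torch.no_grad():
        for _ in range(n_aug):
            angle = float(torch.empty(1).uniform_(-max_angle, max_angle))
            aug = TF.rotate(image, angle) + noise_std * torch.randn_like(image)
            prob = torch.softmax(model(aug), dim=1)
            prob = TF.rotate(prob, -angle)          # undo the rotation
            accum = prob if accum is None else accum + prob
    return accum / n_aug
```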
Importantly, the box plot visualisation of the results leaves too much to the imagination of the readers. It would have been much better to include a table of results. Through a table, it would have been possible to show results for more experiments, even though some visibility on outliers might be lost (compared to box plots). I have no doubt that the technique proposed in the paper is valuable. Given the length constraints of short papers, I also understand that the experimental evaluation is compact. I still think it could have been better, e.g., via a table showing the advantages brought by the proposed technique from different angles. """,2,0 midl20_144_1,"""This paper uses deep neural networks for Alzheimer's disease classification with a model trained on diffusion-weighted MRI. The paper utilizes structural connectivity matrices. The findings support that connectomics information from diffusion MRI tractography is useful for understanding biomarkers of Alzheimer's disease. This paper explores the idea of classifying Alzheimer's data using a CNN. This is a very interesting topic in DL-based medical image processing and therefore fits the scope of the conference well, since human radiologists also consider taking aid from other resources. Unfortunately, this paper makes only very limited steps towards achieving its ambitious goal. Structural connectivity matrices have long been used in the literature, and classification based on such matrices alone has become somewhat dated. Where the paper stands in terms of novelty is not clearly stated. The motivation for the problem is not clear from the introduction. Perhaps citations of more recent work utilizing structural connectivity matrices for Alzheimer's disease classification are necessary to motivate the readers. The topic is very relevant and interesting. The only reason I'm not recommending acceptance is the lack of comparison with the literature. More citations of recent work are required, along with comparisons against the proposed approach.""",2,1 midl20_144_2,"""The authors build a CNN classifier for AD/MCI/CN on ADNI data based on DTI connectivity data. The method has been published before in another application. Several validation experiments are performed to assess the robustness of the network and to identify important nodes/regions using saliency maps. Classification performance is good. The method is well suited for this important application. The validation experiments are well designed: both the ablation analysis and the saliency analysis are useful. The classification performance is placed in the context of the literature. It is not completely clear to me what the added value of this work is. No reference methods for classification performance are included. It would be valuable to know, for example, the performance of a conventional classifier based on the same input data, or, similarly, the performance of a CNN classifier on rawer diffusion/tensor/tractography data. New insights based on the validation experiments are very limited. Important application. The validation experiments are interesting and the classification results are good. Unfortunately, there is not much novelty in methodology or new insights resulting from the validation experiments. """,3,1 midl20_144_3,"""This paper introduces a modification of the existing BrainNetCNN for AD/MCI diagnosis with DW-MRI.
Unlike the original BrainNetCNN, the authors exploit the nature of the adjacency matrix representing the connections among regions by defining two 1D convolution filters in the E2E layer. Further, regional volume features are also used in order to reflect the difference in size among regions when constructing the connectivity matrix. An ablation-based analysis was also conducted to verify the validity of the work. It is of great interest in the field to use deep learning methods for DW-MRI analysis. The idea of feeding regional volume information into the model together with an adjacency matrix is reasonable. The technical novelty is minor and the description of the proposed method is not clear. There is no comparison with other methods; at least a comparison with BrainNetCNN is required. A more rigorous analysis of the ablation and saliency map extraction is expected. In summary, the technical novelty is minor and the description and experiments are insufficient. There is no comparison with other methods, at least with BrainNetCNN, and a more rigorous analysis of the ablation and saliency map extraction is expected.""",1,1
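For readers unfamiliar with this family of models, a rough sketch of an edge-to-edge (E2E) block built from two 1D filters over an N x N connectivity matrix is given below, in the spirit of BrainNetCNN. The layer sizes and the way the row and column responses are combined here are assumptions for illustration, not the authors' implementation.

# Sketch of an E2E block: a 1 x N filter scans rows and an N x 1 filter scans columns of the
# connectivity matrix; their broadcast sum gives a cross-shaped response for every edge.
import torch
import torch.nn as nn

class EdgeToEdge(nn.Module):
    def __init__(self, in_ch, out_ch, n_regions):
        super().__init__()
        self.row_conv = nn.Conv2d(in_ch, out_ch, kernel_size=(1, n_regions))  # 1 x N filter
        self.col_conv = nn.Conv2d(in_ch, out_ch, kernel_size=(n_regions, 1))  # N x 1 filter

    def forward(self, x):                                  # x: (B, C, N, N) adjacency matrices
        n = x.shape[-1]
        rows = self.row_conv(x).expand(-1, -1, -1, n)      # (B, C', N, 1) -> (B, C', N, N)
        cols = self.col_conv(x).expand(-1, -1, n, -1)      # (B, C', 1, N) -> (B, C', N, N)
        return rows + cols                                 # combined edge response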