paper_id,title,keywords,abstract,meta_review 1,"""Domain adaptation model for retinopathy detection from cross-domain OCT images""","['domain adaptation', 'adversarial learning', 'OCT images', 'retinopathy detection']","""A deep neural network (DNN) can assist in retinopathy screening by automatically classifying patients into normal and abnormal categories according to optical coherence tomography (OCT) images. Typically, OCT images captured from different devices show heterogeneous appearances because of different scan settings; thus, the DNN model trained from one domain may fail if applied directly to a new domain. As data labels are difficult to acquire, we proposed a generative adversarial network-based domain adaptation model to address the cross-domain OCT images classification task, which can extract invariant and discriminative characteristics shared by different domains without incurring additional labeling cost. A feature generator, a Wasserstein distance estimator, a domain discriminator, and a classifier were included in the model to enforce the extraction of domain invariant representations. We applied the model to OCT images as well as public digit images. Results show that the model can significantly improve the classification accuracy of cross-domain images.""","""Three out of four reviewers provide positive ratings, mainly based on appreciating the application value of DA for OCT data. I agree that this is a reasonable validation/application, and finding novel application for DA is of value to MIDL community. The paper claims as ""according to our knowledge, there is no research about domain adaptation on retinopathy detection from OCT images."" In comments, R4 provided a reference ""Reducing image variability across OCT devices with unsupervised unpaired learning for improved segmentation of retina (Romo et al., 2019)"", but ignored by the authors. I suggest the authors to carefully check the literature and avoid over-claim. """ 2,"""How Distance Transform Maps Boost Segmentation CNNs: An Empirical Study""","['Distance transform maps', 'medical image segmentation', 'Convolutional neural networks', 'Signed distance function']","""Incorporating distance transform maps of ground truth into segmentation CNNs has been an interesting new trend in the last year. Despite many great works leading to improvements on a variety of segmentation tasks, the comparison among these methods has not been well studied. In this paper, our first contribution is to summarize the latest developments of these methods in the 3D medical segmentation field. The second contribution is that we systematically evaluated five benchmark methods on two representative public datasets. These experiments highlight that all the five benchmark methods can bring performance gains to baseline V-Net. However, the implementation details have a noticeable impact on the performance, and not all the methods hold the benefits in different datasets. Finally, we suggest the best practices and indicate unsolved problems for incorporating distance transform maps into CNNs, which we hope will be useful for the community. The codes and trained models are publicly available at pseudo-url .""","""The papers strengths have been praised unanimously by the reviewers: - Authors address a hot topic on how to improve medical image segmentation with distance transform maps. - The paper is well written and easy to follow with a clear take-home message (and limitations are acknowledged) - Results have been assessed on well-known public dataset are presented. - Code and models will be publicly released While reviewers had somehow identified the same strengths, the weaknesses are different depending to the reviewers, and not critical: limited number of datasets or models (only V-net), the selection criteria of methods for the survey, no theoretical insights to the results of comparisons - although this last is acknowledged as difficult to establish. The reviewer who was in favor of a weak reject noted that the results were somehow inconclusive. The authors replied that statistical tests had now been performed to support the results. Now that this issue has been addressed, I believe this paper can accepted.""" 3,"""Hybrid Apparent Diffusion Coefficient (HADC) Map""","['Prostate cancer detection', 'Hybrid ADC map', 'lesion-aware Cycle-GAN']","""Multiparametric MRI (mpMRI) is an established framework for prostate cancer assessment which includes T2-weighted magnetic resonance (T2w-MR) and diffusion weighted (DW) sequences. Low quality of Apparent Diffusion Coefficient (ADC) maps from the diffusion sequence can hinder such clinical assessment. Herein, we propose to generate Hybrid ADC (HADC) maps from high-quality T2w-MRI using lesion-aware cycle-consistent generative adversarial network (LA-CGAN). Our produced HADC maps contain anatomical information from T2w-MR, and high intra-prostatic contrast of cancerous vs normal tissue, similar to the acquired ADC. Initial results have satisfied the expert radiologist in producing HADC. This work can considerably improve the quality of mpMRI combined assessment for prostate cancer detection. ""","""I agree with the reviewers that the assumption that T2-weighted images contain information to generate ADC images is absurd. Synthesized images can reflect standard ADC values per tissue but not patient-specific values. Besides, Authors mention ""using our proposed LA-CGAN (Zhu et al., 2017)"", so not an anonymous submission. lack of quantitative results.""" 4,"""Addressing The False Negative Problem of MRI Reconstruction Networks by Adversarial Attacks and Robust Training""","['MRI Reconstruction', 'Adversarial Attack', 'Robust Training']","""Deep learning models have been shown to have success in reconstructing accelerated MRI, over traditional methods. However, it has been observed that these methods tend to miss the small features that are rare, such as meniscal tears, subchondral osteophyte, etc. This is a concerning finding as these small and rare features are the most relevant in clinical diagnostic settings. In this work, we propose a framework to find the worst-case false negatives by adversarially attacking the trained models and improve the models' ability to reconstruct the small features by robust training.""","""This paper proposes an adversarial attack strategy to overcome the problem with accelerated MRI models missing small, rare (but often important!) features. The paper is interesting, tackles an important problem, and reaches interesting conclusions on the source of the problem. The reviewers also point out that the paper is difficult to read at times; the authors are strongly encouraged to take the feedback from these reviews into account to make an even better camera ready paper.""" 5,"""A CNN-LSTM Architecture for Detection of Intracranial Hemorrhage on CT scans""","['Computed Tomography', 'Intracranial Hemorrhage', 'CNN', 'LSTM']","""We propose a novel method that combines a convolutional neural network (CNN) with a long short-term memory (LSTM) mechanism for accurate prediction of intracranial hemorrhage on computed tomography (CT) scans. The CNN plays the role of a slice-wise feature extractor while the LSTM is responsible for linking the features across slices. The whole architecture is trained end-to-end with input being an RGB-like image formed by stacking 3 different viewing windows of a single slice. We validate the method on the recent RSNA Intracranial Hemorrhage Detection challenge and on the CQ500 dataset. For the RSNA challenge, our best single model achieves a weighted log loss of 0.0529 on the leaderboard, which is comparable to the top 3 performances, almost all of which make use of ensemble learning. Importantly, our method generalizes very well: the model trained on the RSNA dataset significantly outperforms the 2D model, which does not take into account the relationship between slices, on CQ500. Our codes and models will be made public.""","""While there are some concerns regarding the limited novelty of the method, the majority of the reviewers find this paper of sufficient merit to justify acceptance. """ 6,"""View Classification and Object Detection in Cardiac Ultrasound to Localize Valves via Deep Learning""","['ultrasound', 'echocardiagram', 'view classification', 'object detection', 'valve localization', 'deep learning']","""Echocardiography provides an important tool for clinicians to observe the function of the heart in real time, at low cost, and without harmful radiation. Automated localization and classification of heart valves enables automatic extraction of quantities associated with heart mechanical function and related blood flow measurements. We propose a machine learning pipeline that uses deep neural networks for separate classification and localization steps. As the first step in the pipeline, we apply view classification to echocardiograms with ten unique anatomic views of the heart. In the second step, we apply deep learning-based object detection to both localize and identify the valves. Image segmentation based object detection in echocardiography has been shown in many earlier studies but, to the best of our knowledge, this is the first study that predicts the bounding boxes around the valves along with classification from 2D ultrasound images with the help of deep neural networks. Our object detection experiments suggest that it is possible to localize and identify multiple valves precisely. ""","""The paper receives a consistent reviewing rate of 'weak reject' from four reviewers with the main concern regarding the novelty. The AC seems no reason to overrule such a rate.""" 7,"""Spine intervertebral disc labeling using a fully convolutional redundant counting model""","['Deep learning', 'Keypoints detection', 'Spinal cord', 'MRI', 'Intervertebral disc']","""Labeling intervertebral discs is relevant as it notably enables clinicians to understand the relationship between a patient's symptoms (pain, paralysis) and the exact level of spinal cord injury. However, manually labeling those discs is a tedious and user-biased task which would benefit from automated methods. While some automated methods already exist for MRI and CT-scan, they are either not publicly available, or fail to generalize across various imaging contrasts. In this paper, we combine a Fully Convolutional Network (FCN) with inception modules to localize and label intervertebral discs. We demonstrate a proof-of-concept application in a publicly-available multi-center and multi-contrast MRI database (n=235 subjects). The code is publicly available at [URL will be added after the double blind review].""","""This paper is about intervertebral disc labeling. Reviewers are relatively diverse in their scores. What concerns me is that the authors might have neglected a bunch of related works, particularly from the well-known series of MICCAI CSI workshops. Solid comparisons are necessary in this case to prove the methods effectiveness.""" 8,"""Automatic Diagnosis of Pulmonary Embolism Using an Attention-guided Framework: A Large-scale Study""","['Deep learning', 'computer-aided diagnosis', 'pulmonary embolism', 'attention']","""Pulmonary Embolism (PE) is a life-threatening disorder associated with high mortality and morbidity. Prompt diagnosis and immediate initiation of therapeutic action is important. We explored a deep learning model to detect PE on volumetric contrast-enhanced chest CT scans using a 2-stage training strategy. First, a residual convolutional neural network (ResNet) was trained using annotated 2D images. In addition to the classification loss, an attention loss was added during training to help the network focus attention on PE. Next, a recurrent network was used to scan sequentially through the features provided by the pre-trained ResNet to detect PE. This combination allows the network to be trained using both a limited and sparse set of pixel-level annotated images and a large number of easily obtainable patient-level image-label pairs. We used 1,670 sparsely annotated studies and more than 10,000 labeled studies in our training. On a test set with 2,160 patient studies, the proposed method achieved an area under the ROC curve (AUC) of 0.812. The proposed framework is also able to provide localized attention maps that indicate possible PE lesions, which could potentially help radiologists accelerate the diagnostic process.""","""Based on the reviewer comments which are all positive about the methodology soundness, clinical value, large-scale dataset employed, and good presentation of the paper, I would like to recommend acceptance. The authors should address the issues raised by reviewers regarding comparison with state-of-the-art methods in the experiments.""" 9,"""On the effectiveness of GAN generated cardiac MRIs for segmentation""","['GAN', 'CNN', 'Deep Learning', 'cine-MRI', 'Heart']","""In this work, we propose a Variational Autoencoder (VAE) - Generative Adversarial Networks (GAN) model that can produce highly realistic MRI together with its pixel accurate groundtruth for the application of cine-MR image cardiac segmentation. On one side of our model is a Variational Autoencoder (VAE) trained to learn the latent representations of cardiac shapes. On the other side is a GAN that uses SPatially-Adaptive (DE)Normalization (SPADE) modules to generate realistic MR images tailored to a given anatomical map. At test time, the sampling of the VAE latent space allows to generate an arbitrary large number of cardiac shapes, which are fed to the GAN that subsequently generates MR images whose cardiac structure fits that of the cardiac shapes. In other words, our system can generate a large volume of realistic yet labeled cardiac MR images. We show that segmentation with CNNs trained with our synthetic annotated images gets competitive results compared to traditional techniques. We also show that combining data augmentation with our GAN-generated images lead to an improvement in the Dice score of up to 12 percent while allowing for better generalization capabilities on other datasets.""","""This paper describes a way to use VAE to produce more MR images, which in turn contribute to segmentation. Review comments are generally at borderline. The novelty and evaluation of the paper might be limited though.""" 10,"""Uncertainty-Aware Training of Neural Networks for Selective Medical Image Segmentation""",[],"""State-of-the-art deep learning based methods have achieved remarkable performance on medical image segmentation. Their applications in the clinical setting are, however, limited due to the lack of trustworthiness and reliability. Selective image segmentation has been proposed to address this issue by letting a DNN model process instances with high confidence while referring difficult ones with high uncertainty to experienced radiologists. As such, the model performance is only affected by the predictions on the high confidence subset rather than the whole dataset. Existing selective segmentation methods, however, ignore this unique property of selective segmentation and train their DNN models by optimizing accuracy on the entire dataset. Motivated by such a discrepancy, we present a novel method in this paper that considers such uncertainty in the training process to maximize the accuracy on the confident subset rather than the accuracy on the whole dataset. Experimental results using the whole heart and great vessel segmentation and gland segmentation show that such a training scheme can significantly improve the performance of selective segmentation. ""","""The reviewers agree that the proposed method is novel and interesting, and that the results are backed by good experimental results. The questions raised during teh review have been answered well. I thus recomend this paper be accepted.""" 11,"""Automated MRI based pipeline for glioma segmentation and prediction of grade, IDH mutation and 1p19q co-deletion""","['Glioma', 'IDH mutation', '1p19q co-deletion', 'deep learning', 'MRI']","""In the WHO glioma classification guidelines grade, IDH mutation and 1p19q co-deletion play a central role as they are important markers for prognosis and optimal therapy planning. Therefore, we propose a fully automatic, MRI based, 3D pipeline for glioma segmentation and classification. The designed segmentation network was a 3D U-Net achieving an average whole tumor dice score of 90%. After segmentation, the 3D tumor ROI is extracted and fed into the multi-task classification network. The network was trained and evaluated on a large heterogeneous dataset of 628 patients, collected from The Cancer Imaging Archive and BraTS 2019 databases. Additionally, the network was validated on an independent dataset of 110 patients retrospectively acquired at our University Hospital (UH). Classification AUC scores are 0.93, 0.94 and 0.82 on the TCIA test data and 0.94, 0.86 and 0.87 on the UH data for grade, IDH and 1p19q status respectively. ""","""There are three reviews all in favour of the paper. One brief evaluation is rating it as 'weak reject'. There were critical questions raised by several reviewers, i.e., about relation to prior work and - a very important one - about leakage between training and testing data (R1). The authors decided not to respond to it. This is a critical matter as the high performances are the major innovation of the paper. To this end, I would side with the critical reviewer and consider it as an application paper that can be accepted in case applications should be of particular interest at MIDL2020. Otherwise, I would rather consider it to be a weak reject.""" 12,"""Suggestive Labelling for Medical Image Analysis by Adaptive Latent Space Sampling""","['Deep Learning', 'Data Efficient', 'Medical Imaging']","""Supervised deep learning for medical imaging analysis requires a large amount of training samples with annotations (e.g. label class for classification task, pixel- or voxel-wised label map for medical segmentation tasks), which are expensive and time-consuming to obtain. During the training of a deep neural network, the annotated samples are fed into the network in a mini-batch way, where they are often regarded of equal importance. However, some of the samples may become less informative during training, as the magnitude of the gradient start to vanish for these samples. In the meantime, other samples of higher utility or hardness may be more demanded for the training process to proceed and require more exploitation. To address the challenges of expensive annotations and loss of sample informativeness, here we propose a novel training framework which adaptively selects informative samples that are fed to the training process. To evaluate the proposed idea, we perform an experiment on a medical image dataset IVUS for biophysical simulation task.""","""This short paper proposes an algorithm to select maximally useful batches for neural network training, based on the magnitude of the gradient as evaluated in a VAE latent space. While the idea is interesting, and appreciated by the reviewers, the method does not appear to outperform random sampling, and the authors also do not compare to other label-efficient methods such as active learning.""" 13,"""Prostate Cancer Semantic Segmentation by Gleason Score Group in mp-MRI with Self Attention Model on the Peripheral Zone""","['semantic segmentation', 'attention model', 'convolutional neural network', 'computer-aided detection', 'magnetic resonance imaging', 'prostate cancer']","""Multiparametric magnetic resonance imaging (mp-MRI) has shown promising results in the detection of prostate cancer (PCa). However, discriminating clinically significant (CS) from benign lesions is time demanding and challenging, even for experienced readers, especially when individual MR sequences yield conflicting findings. Computer aided detection (CADe) or diagnostic (CADx) systems based on standard or deep supervised machine learning have achieved high performance in assisting radiologists for this diagnostic binary detection task. We aim to go one step further in the diagnostic refinement by characterizing the aggressiveness of PCa lesions, assessed by the Gleason score (GS) group grading. This challenging problem has been very recently addressed from the deep learning perspective by a few groups. In this work, we propose a novel end-to-end multiclass deep network to jointly perform the segmentation of the peripheral prostate zone (PZ) as well as the detection and GS Group grading of PZ lesions. Our U-Net inspired architecture is constituted from a standard encoding part that first extracts the latent information from multichannel T2 weighted (T2w) and apparent diffusion coefficient (ADC) input images. This latent representation is then connected to two separate decoding branches : 1) the first one performs a binary PZ segmentation 2) the second branch uses this zonal prior as an attention gate for the detection and grading of PZ lesions. Performance of this model was evaluated on a dataset of 98 mp-MRI exams including 57 exams acquired on a 1.5 Tesla scanner (Symphony; Siemens, Erlangen, Germany) and 41 exams acquired on a 3 Tesla scanner (Discovery; General Electric, Milwaukee, USA). All patients underwent a prostatectomy after the MR exams. The prostatectomy specimens were analyzed a posteriori by an anatomopathologist thus providing the histological ground. A total of 132 lesions were delineated on the images, including 37, 47, 23, 16 and 9 lesions of GS 6, 3+4, 4+3, 8 and 9, respectively. All lesions with a GS > 6 were considered as clinically significant. The deep model was trained and validated with a 5-fold cross-validation. Performance of our model was compared to a U-Net baseline model to assess the impact of the self attention module on PCa detection. A free-response receiver operating (FROC) analysis was conducted to evaluate the performance in detecting CS lesions and to discriminate lesions of the different GS groups. Regarding the detection of CS lesions, our model achieves 75.78% sensitivity at 2.5 false positive per patient. Regarding the automatic GS group grading, the Cohen quadratic weighted kappa coefficient is 0.35, which is considered as a fair agreement and an improvement with regards to the baseline model. Our model reaches 78% and 18% sensitivity for GS 9 and GS 6 at 1 false positive per patient, respectively. Our method achieves good performance without requiring any prior manual region delineation in clinical practice. We show that the addition of the proposed self attention mechanism improves the CAD performance in comparison to the baseline model. ""","""The paper appears to not convincingly add much to existing work or related work that was mostly overlooked. Overall well written, the paper may have limited clinical applicability due to the restriction to specific cancer types. However, discussion has shown that the paper could lead to interesting discussions and further fruitful exchanges if presented at the conference. Clarification and inclusion of the points made by the reviewers are required for acceptance.""" 14,"""Chest X-Ray Pneumothorax Segmentation with the Multistep Post-Processing""","['Neural Network', 'Deep Learning', 'Segmentation', 'Medical Imaging']","""In this work we present recent results of the pneumothorax segmentation from the chest X-ray images. Pneumothorax may appear in case of dull chest injury, as a continuation of hidden problems with the lungs, or even more there could be no reason at all for finding. In several situations, lung collapse can turn out as serious threat to life. We propose new method which includes the chest X-ray image segmentation pipeline with the multistep conditioned post-processing. As the result, we demonstrate significant improvement compare to any strong baseline by reduction of the pneumothorax collapse regions which are missed out and of false positive detections. Our results indicate very high accuracy and strong robustness of the algorithm confirmed by corresponding efficiency on the two stage test dataset with a priori unknown and absolutely different distribution. Final Dice scores 0.8821 and 0.8614 for stage 1 and stage 2 test sets respectively were resulted in top 0.01% standing of the private leaderboard on Kaggle competition platform.""","""There appears to be good results but the paper itself is not of adequate quality. Also issues with anonymity.""" 15,"""Overview of Scanner Invariant Representations""","['Harmonization', 'diffusion MRI', 'Invariant Representation']","""Pooled imaging data from multiple sources is subject to bias from each source. Studies that do not correct for these scanner/site biases at best lose statistical power, and at worst leave spurious correlations in their data. Estimation of the bias effects is non-trivial due to the paucity of data with correspondence across sites, so called ""traveling phantom"" data, which is expensive to collect. Nevertheless, numerous solutions leveraging direct correspondence have been proposed. In contrast to this, Moyer et al. (2019) proposes an unsupervised solution using invariant representations, one which does not require correspondence and thus does not require paired images. By leveraging the data processing inequality, an invariant representation can then be used to create an image reconstruction that is uninformative of its original source, yet still faithful to the underlying structure. In the present abstract we provide an overview of this method.""","""All reviewers agree on the relevance of the work. Most major concerns are related to the fact that the submission is a review of recently published work, but this is actually a format encouraged by the conference.""" 16,"""On Direct Distribution Matching for Adapting Segmentation Networks""","['domain adaptation', 'unsupervised domain adaptation', 'semantic segmentation', 'direct distribution matching']","""Minimization of distribution matching losses is a principled approach to domain adaptation in the context of image classification. However, it is largely overlooked in adapting segmentation networks, which is currently dominated by adversarial models. We propose a class of loss functions, which encourage direct kernel density matching in the network-output space, up to some geometric transformations computed from unlabeled inputs. Rather than using an intermediate domain discriminator, our direct approach unifies distribution matching and segmentation in a single loss. Therefore, it simplifies segmentation adaptation by avoiding extra adversarial steps, while improving quality, stability and efficiency of training. We juxtapose our approach to state-of-the-art segmentation adaptation via adversarial training in the network-output space. In the challenging task of adapting brain segmentation across different magnetic resonance imaging (MRI) modalities, our approach achieves significantly better results both in terms of accuracy and stability. ""","""All reviewers are convinced by the scientific value and evaluation results about this paper. A clear acceptance. The final version should clarify the motivation of the domain adaptation setting and its advantages, discuss the limitation of application scope, clarify data splits of training/evaluation, as pointed out by reviewers. """ 17,"""Deep Metric Learning Network using Proxies for Chromosome Classification in Karyotyping Test""","['Karyotyping test', 'Karyotype', 'Chromosome', 'Metric learning', 'Proxy', 'Deep learning']","""In karyotyping, the classification of chromosomes is a tedious, complicated, and time-consuming process. It requires extremely careful analysis of chromosomes by well-trained cytogeneticists. To assist cytogeneticists in karyotyping, we introduce Proxy-ResNeXt-CBAM which is a metric learning based network using proxies with a convolutional block attention module (CBAM) designed for chromosome classification. RexNeXt-50 is used as a backbone network. To apply metric learning, the fully connected linear layer of the backbone network (ResNeXt-50) is removed and is replaced with CBAM. The similarity between embeddings, which are the outputs of the metric learning network, and proxies are measured for network training. Proxy-ResNeXt-CBAM is validated on a public chromosome image dataset, and it achieves an accuracy of 95.86%, a precision of 95.87%, a recall of 95.9%, and an F-1 score of 95.79%. Proxy-ResNeXt-CBAM which is the metric learning network using proxies outperforms the baseline networks. In addition, the results of our embedding analysis demonstrate the effectiveness of using proxies in metric learning for optimizing deep convolutional neural networks. As the embedding analysis results show, Proxy-ResNeXt-CBAM obtains a 94.78% Recall@1 in image retrieval, and the embeddings of each chromosome are well clustered according to their similarity. ""","""All three reviewers seem to agree on the fact that the results presented by the paper are promising and outperform the results from recently published baselines on a public chromosome classification dataset. However, the main issue of the paper is a lack of novelty since main contributions (deep metric learning with proxy and CBAM ) have been proposed before. In addition, the paper shows little motivation for deep metric learning with proxy and CBAM. The rebuttal tries to address the motivation issues mentioned above, but not very successfully given that there is a very pragmatic explanation without any rational decision process that can justify the use of deep metric learning with proxy and CBAM. One reviewer also mentions that the paper does not explain the objective functions and the whole training procedure. Given these issues and a rebuttal that did not answered the reviewers' questions, I do not recommend this paper for acceptance.""" 18,"""Interpreting Chest X-rays via CNNs that Exploit Hierarchical Disease Dependencies and Uncertainty Labels""","['Chest X-ray', 'Uncertainty label', 'Label smoothing']","""We present a new approach based on deep convolutional neural networks (CNNs) for predicting the presence of 14 common thoracic diseases and observations. A strong set of CNNs are trained on over 200,000 chest X-ray images provided by CheXpert - a large scale chest X-ray dataset. In particular, dependencies among abnormality labels and uncertain samples are fully exploited during the training and inference stages. Experiments indicate that the proposed method achieves a mean area under the curve (AUC) of 0.940 in predicting 5 selected pathologies. To the best of our knowledge, this is the highest AUC score yet reported on this dataset to date. Additionally, the proposed method is also evaluated on the independent test set of the CheXpert competition and reports a performance level comparable to practicing radiologists. Our obtained result ranks first on the CheXpert leaderboard at the time of writing this paper.""","""First of all, this paper does not follow the MIDL template, with missing the Abstract section. Major concerns from the reviewers lie in the unclear presentation of results and large overlapping with an arXiv paper. Nevertheless, I think that taking into account the structural dependencies of labels is interesting.""" 19,"""DIVA: Domain Invariant Variational Autoencoder""","['representation learning', 'generative modeling', 'domain generalization', 'invariance']","""We consider the problem of domain generalization, namely, how to learn representations given data from a set of domains that generalize to data from a previously unseen domain. We propose the Domain Invariant Variational Autoencoder (DIVA), a generative model that tackles this problem by learning three independent latent subspaces, one for the domain, one for the class, and one for any residual variations. We highlight that due to the generative nature of our model we can also incorporate unlabeled data from known or previously unseen domains. To the best of our knowledge this has not been done before in a domain generalization setting. This property is highly desirable in fields like medical imaging where labeled data is scarce. We experimentally evaluate our model on the rotated MNIST benchmark and a malaria cell images dataset where we show that (i) the learned subspaces are indeed complementary to each other, (ii) we improve upon recent works on this task and (iii) incorporating unlabelled data can boost the performance even further.""","""The reviewers generally agree that the contributions in DIVA are interesting, and despite several complaints, worthy of acceptance to MIDL. The initially skeptic reviewers appreciated the thorough and thoughtful responses and new experiments from the authors, which highlights a good review process. The authors should incorporate all the feedback from the authors, including the new results and the clarifications necessary to alleviate the confusing parts of the paper. In the camera ready, it should also be made clear that a preliminary version of this work was already accepted at an ICLR workshop, what the differences are, and make it clear that this is the correct version to be referred to. Also, please change the description of the paper published in August of last year (Ruichu et al, brought up by reviewer #2) as 'contemporary'. The MIDL deadline was 6 months after this, it seems like a stretch.""" 20,"""Skull-RCNN: A CNN-based network for the skull fracture detection""","['Convolutional neural networks', 'Skull fracture', 'Object detection', 'Skeletonization']","""Skull fractures, following head trauma, may bring several complications and cause epidural hematomas. Therefore, it is of great significance to locate the fracture in time. However, the manual detection is time-consuming and laborious, and the previous studies for the automatic detection could not achieve the accuracy and robustness for clinical application. In this work, based on the Faster R-CNN, we propose a novel method for more accurate skull fracture detection results, and we name it as the Skull R-CNN. Guiding by the morphological features of the skull, a skeleton-based region proposal method is proposed to make candidate boxes more concentrated in key regions and reduced invalid boxes. With this advantage, the region proposal network in Faster R-CNN is removed for less computation. On the other hand, a novel full resolution feature network is constructed to obtain more precise features to make the model more snesetive to small objects. Experiment results showed that most of skull fractures could be detected correctly by the proposed method in a short time. Compared to the previous works on the skull fracture detecion, Skull R-CNN significantly reduces the less false positives, and keeps a high sensitivity.""","""I agree with the reviewers that the paper is interesting and that the addressed problem is clinically important and technically difficult. The authors have performed thorough revision and addressed issues raised by the reviewers. I appreciate that the authors changed the experimental settings to address the major issue of data separation and provided updated results. As pointed by the reviewers, the paper contains numerous language issues that hamper the reading and understanding of the work. Hence, I strongly advise the authors to very carefully check the language. """ 21,"""A Keypoint-based Morphological Signature for Large-scale Neuroimage Analysis""",[],"""We present an image keypoint-based morphological signature that can be used to efficiently assess the pair-wise whole-brain similarity for large MRI datasets. Similarity is assessed via Jaccard-like measure of set overlap based on the proportion of keypoints shared by an image pair, which may be evaluated in O(N log N) computational complexity given a set of N images using fast nearest neighbor indexing. Image retrieval experiments combine four large public neuroimage datasets including the Human Connectome Project (HCP), the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the Open Access Series of Imaging Studies (OASIS), for a total of N=7536 T1-weighted MRIs of 3334 unique subjects. Our method identifies all pairs of same-subjects images based on a simple threshold, and revealed a number of previously unknown subject labeling errors.""","""This is a good paper, unfortunately it is out of the scope if MIDL since no deep learning is used in the proposed method.""" 22,"""Multiple resolution residual network for automatic thoracic organs-at-risk segmentation from CT""","['Multiple residual feature streams', 'thoracic normal organs', 'AAPM thoracic grand challenge dataset']","""We implemented and evaluated a multiple resolution residual network (MRRN) for multiple normal organs-at-risk (OAR) segmentation from computed tomography (CT) images for thoracic radiotherapy treatment (RT) planning. Our approach simultaneously combines feature streams computed at multiple image resolutions and feature levels through residual connections. The feature streams at each level are updated as the images are passed through various feature levels. We trained our approach using 206 thoracic CT scans of lung cancer patients with 35 scans held out for validation to segment the left and right lungs, heart, esophagus, and spinal cord. This approach was tested on the 60 CT scans from the open-source AAPM Thoracic Auto-Segmentation Challenge dataset. Performance was measured using the Dice Similarity Coefficient (DSC). Our approach outperformed the best-performing method in the grand challenge for hard-to-segment structures like the esophagus and achieved comparable results for all other structures. Median DSC using our method was 0.97 (interquartile range [IQR]: 0.97-0.98) for the left and right lungs, 0.93 (IQR: 0.93-0.95) for the heart, 0.78 (IQR: 0.76-0.80) for the esophagus, and 0.88 (IQR: 0.86-0.89) for the spinal cord. ""","""Strengths: This paper is very well organized and written. The proposed approach is not novel, but is quite decent - the contribution is a well-validated application. A public dataset is used with comparison w.r.t. other approaches. Weaknesses: - There is a lack of technical details on how the residual connections are build, on the input/output size, on the upsampling path. - Out of 5 organs, the proposed approach is superior for only one organ. For the rest, it seems comparable to the other methods. So the results are reasonable, however not outstanding. The Weak Reject ratings are based mainly on the lack of technical details and lack of originality. Regarding lack of originality, the paper is a well-validated application and acknowledged as such by the authors. Regarding the lack of technical details, it is also noted by the reviewers who accepted the paper. Since the major argument in favor of weak reject is the lack of details - which I believe can be addressed by the authors - my recommendation is towards a weak acceptance of this paper.""" 23,"""Extending Unsupervised Neural Image Compression With Supervised Multitask Learning""","['Neural image compression', 'supervised multitask learning', 'histopathology']","""We focus on the problem of training convolutional neural networks on gigapixel histopathology images to predict image-level targets. For this purpose, we extend Neural Image Compression (NIC), an image compression framework that reduces the dimensionality of these images using an encoder network trained unsupervisedly. We propose to train this encoder using supervised multitask learning (MTL) instead. We applied the proposed MTL NIC to two histopathology datasets and three tasks. First, we obtained state-of-the-art results in the Tumor Proliferation Assessment Challenge of 2016 (TUPAC16). Second, we successfully classified histopathological growth patterns in images with colorectal liver metastasis (CLM). Third, we predicted patient risk of death by learning directly from overall survival in the same CLM data. Our experimental results suggest that the representations learned by the MTL objective are: (1) highly specific, due to the supervised training signal, and (2) transferable, since the same features perform well across different tasks. Additionally, we trained multiple encoders with different training objectives, e.g. unsupervised and variants of MTL, and observed a positive correlation between the number of tasks in MTL and the system performance on the TUPAC16 dataset.""","""There is a high demand in histopathology image analysis to reduce or compress the (originally very large) image size. This paper deals with improving the representations learned by Neural Image Compression (NIC) algorithms via multitask supervised learning. Results are provided on 2 datasets and 3 tasks. The reviewers have collectively acknowledged the paper as well-written, addressing a hot topic in histopathology image analysis. The work is of high quality with excellent analysis of related works, soundness of the proposed approach, extensive and well-conducted validation, and finally conclusive results leading to a clear impact of the proposed method. Weaknesses differ depending on the reviewer: they relate primarily to the clarity of the implementation details, rationale behind some choices (with regard to the tasks at hand), additional leaderboard entries, additional results or missing analysis. The authors have replied precisely and made the required clarifications, and for most remarks, have updated the manuscript to clarify the questions. In some cases, they left the questions out their manuscript and carefully justified why. Given both the original reviewers rating and their associated confidence level, as well as the authors rebuttal, I strongly recommend this paper to be accepted.""" 24,"""Automatic segmentation of stroke lesions in non-contrast computed tomography with convolutional neural networks""","['stroke', 'computed tomography', 'segmentation', 'deep learning', 'CNN']","""Manual lesion segmentation for non-contrast computed tomography (NCCT), a common modality for volumetric follow-up assessment of ischemic strokes, is time-consuming and subject to high inter-observer variability. Our approach uses a combination of a 3D convolutional neural network (CNN) combined with post-processing methods. A total of 272 multi-center clinical NCCT datasets were used: 204 for CNN training, 48 for validation and developing post-processing methods, and 20 for testing. The testing datasets were from centers that did not contribute to the training and validation sets, and were segmented by two neuroradiologists. We achieved a median Dice score of 0.63, which was significantly improved to 0.66 with post-processing. The automatically segmented lesion volumes were not significantly different from the lesion volumes determined by the two manual observers. As the model was trained on datasets from multiple centers, it is broadly applicable. ""","""This paper shows promising results when applying developed architecture to imaging types usually not deemed sufficiently informative. Despite the limited testing and validation pool, the statistics appear well performed.""" 25,"""Uncertainty-based Graph Convolutional Networks for Organ Segmentation Refinement""","['Organ segmentation refinement', 'cnn uncertainty', 'gcn', 'semi-supervised']","""Organ segmentation in CT volumes is an important pre-processing step in many computer assisted intervention and diagnosis methods. In recent years, convolutional neural networks have dominated the state of the art in this task. However, since this problem presents a challenging environment due to high variability in the organ's shape and similarity between tissues, the generation of false negative and false positive regions in the output segmentation is a common issue. Recent works have shown that the uncertainty analysis of the model can provide us with useful information about potential errors in the segmentation. In this context, we proposed a segmentation refinement method based on uncertainty analysis and graph convolutional networks. We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem that is solved by training a graph convolutional network. To test our method we refine the initial output of a 2D U-Net. We validate our framework with the NIH pancreas dataset and the spleen dataset of the medical segmentation decathlon. We show that our method outperforms the state-of-the art CRF refinement method by improving the dice score by 1% for the pancreas and 2% for spleen, with respect to the original U-Net's prediction. Finally, we discuss the results and current limitations of the model for future work in this research direction. For reproducibility purposes, we make our code publicly available""","""All the reviewers recommended acceptance of this work. After reading their comments and the answer given by the authors in the rebuttal, I think this work can be accepted for publication at MIDL. Please, when submitting the Camera Ready version, take into account the suggestions made by the reviewers.""" 26,"""Data-Driven Prediction of Embryo Implantation Probability Using IVF Time-lapse Imaging""","['Deep Learning', 'In Vitro Fertilization', 'Video Classification']","""The process of fertilizing a human egg outside the body in order to help those suffering from infertility to conceive is known as in vitro fertilization (IVF). Despite being the most effective method of assisted reproductive technology (ART), the average success rate of IVF is a mere 20-40%. One step that is critical to the success of the procedure is selecting which embryo to transfer to the patient, a process typically conducted manually and without any universally accepted and standardized criteria. In this paper we describe a novel data-driven system trained to directly predict embryo implantation probability from embryogenesis time-lapse imaging videos. Using retrospectively collected videos from 272 embryos, we demonstrate that, when compared to an external panel of embryologists, our algorithm results in a 12% increase of positive predictive value and a 29% increase of negative predictive value. ""","""There is consensus that this paper has exciting results and would generate good discussion at the conference.""" 27,"""Efficient Out-of-Distribution Detection in Digital Pathology Using Multi-Head Convolutional Neural Networks""","['uncertainty estimation', 'digital pathology', 'multi-head ensembles']","""Successful clinical implementation of deep learning in medical imaging depends, in part, on the reliability of the predictions. Specifically, the system should be accurate for classes seen during training while providing calibrated estimates of uncertainty for abnormalities and unseen classes. To efficiently estimate predictive uncertainty, we propose the use of multi-head convolutional neural networks (M-heads). We compare its performance to related and more prevalent approaches, such as deep ensembles, on the task of out-of-distribution (OOD) detection. To this end, we evaluate models, trained to discriminate normal lymph node tissue from breast cancer metastases, on lymph nodes containing lymphoma. We show the ability to discriminate between the in-distribution lymph node tissue and lymphoma by evaluating the AUROC based on the uncertainty signal. Here, the best performing multi-head CNN (91.7) outperforms both Monte Carlo dropout (86.5) and deep ensembles (86.8). Furthermore, we show that the meta-loss function of M-heads improves OOD detection in terms of AUROC from 87.4 to 88.7.""","""Thanks for the reviewers taking time to comment and discuss. The authors' extensive rebuttal successfully convinced the two reviewers (with high-confidence evaluation) in the discussion phase. I also think that the topic of out-of-distribution detection will be interesting for the MIDL community. So an acceptance is suggested. Authors should revise the final version by addressing the issues as discussed with the reviewers.""" 28,"""Motion Recovery from Radon Transformed Image Using Neural Networks""","['motion correction', 'CNN', 'convolutional enoder decoder', 'inverse radon transform']","""In this paper we address motion correction in image reconstruction. Patient motion, including breathing, is a persistent problem in medical imaging. Cardiac motion is particularly enigmatic and often ignored, except in high-speed imaging modalities like MRI. Motions may create artifacts to the extent that the image may have to be discarded. Since the beginning of medical imaging, motion correction remained an important subcategory of research. Motion corrections may be applied during or after tomographic image reconstruction. In this work we considered motion as a Gaussian blur at the image level. Discrete radon transform is applied to the blurred images to create corresponding noisy sinograms that mimic real imaging scenario. Our deep learning based tool recovered accurately (1) the blurring functions with an artificial convolutional neural network (CNN) directly from the sinograms, and (2) successfully reconstructed (inverse radon transformed) the noise-free images utilizing an adaptation of the convolutional encoder-decoder network (CED) from the literature. Our work shows that neural networks are not only capable of eliminating systematic noise in reconstruction but can also recover the noise model. ""","""The paper gets bad review and the four reviewers rate it as 'strong reject'. The concerns are as follows: it is very preliminary with just a proof-of-concept using only the toy experiment (while it is easy to generate some simulated images based real data). Also the model assumption is questioned too. The AC's opinion is consistent with the reviewers.""" 29,"""Deep learning approach to describing and classifying fungi microscopic images""","['mycological diagnosis', 'microscopic images', 'deep learning', 'bag-of-words']","""Preliminary diagnosis of fungal infections can rely on microscopic examination. However, in many cases, it does not allow unambiguous identification of the species due to their visual similarity. Therefore, it is usually necessary to use additional biochemical tests. That involves additional costs and extends the identification process up to 10 days. Such a delay in the implementation of targeted therapy may be grave in consequence as the mortality rate for immunosuppressed patients is high. In this paper, we apply a machine learning approach based on deep neural networks and bag-of-words to classify microscopic images of various fungi species. Our approach makes the last stage of biochemical identification redundant, shortening the identification process by 2-3 days, and reducing the cost of the diagnosis.""","""The reviewers disagree about this paper. It appears that the motivation is well described and considered to be important by the reviewers. Another positive aspect of the paper is the choice of fungal types to be characterised by the model, which seem to replicate well the distribution of common fungal infections. However some reviewers identified lack of details about the implementation and the model training process. In particular, how are the patches generated from the fungi images and how many patches are used for training? Also, it is surprising that AlexNet performs better than InceptionV3 and that bag-of-words with InceptionV3 performs much worse than InceptionV3. Another issue identified by the reviewers is that the proposed model seems to be unable to provide an unambiguous species identification because of their visual similarity, which can be potentially devastating for the viability of the method. The paper has pros and cons, but given the usefulness of the application and the presented results, I am leaning towards acceptance. """ 30,"""Knee Injury Detection using MRI with Efficiently-Layered Network (ELNet)""","['Knee Diagnosis', 'MRI', 'Deep Learning', 'ACL Tear', 'Meniscus Tear', 'Knee Injury', 'Medical Triage']","""Magnetic Resonance Imaging (MRI) is a widely-accepted imaging technique for knee injury analysis. Its advantage of capturing knee structure in three dimensions makes it the ideal tool for radiologists to locate potential tears in the knee. In order to better confront the ever growing workload of musculoskeletal (MSK) radiologists, automated tools for patients' triage are becoming a real need, reducing delays in the reading of pathological cases. In this work, we present the Efficiently-Layered Network (ELNet), a convolutional neural network (CNN) architecture optimized for the task of initial knee MRI diagnosis for triage. Unlike past approaches, we train ELNet from scratch instead of using a transfer-learning approach. The proposed method is validated quantitatively and qualitatively, and compares favorably against state-of-the-art MRNet while using a single imaging stack (axial or coronal) as input. Additionally, we demonstrate our model's capability to locate tears in the knee despite the absence of localization information during training. Lastly, the proposed model is extremely lightweight ( pseudo-formula 1MB) and therefore easy to train and deploy in real clinical settings.""","""The reviewers agree that this is a well-written piece of work. The authors achieve promising results while proposing a very lightweight model architecture with regard to memory consumption. There is some concern about the lack of ablation studies that could show the benefit of some of the design choices made by the authors. However, the use of some normalization layers typically used in different fields of deep learning and the application of blur pooling are showing interesting results in this application and warrant further discussion. The authors are encouraged to add the missing ablation experiments to their final paper version.""" 31,"""4D Semantic Cardiac Magnetic Resonance Image Synthesis on XCAT Anatomical Model""","['4D semantic image synthesis', 'cardiac magnetic resonance imaging', 'XCAT phantom', 'generative adversarial network', 'SPADE GAN']","""We propose a hybrid controllable image generation method to synthesize anatomically meaningful 3D+t labeled Cardiac Magnetic Resonance (CMR) images. Our hybrid method takes the mechanistic 4D eXtended CArdiac Torso (XCAT) heart model as the anatomical ground truth and synthesizes CMR images via a data-driven Generative Adversarial Network (GAN). We employ the state-of-the-art SPatially Adaptive De-normalization (SPADE) technique for conditional image synthesis to preserve the semantic spatial information of ground truth anatomy. Using the parameterized motion model of the XCAT heart, we generate labels for 25 time frames of the heart for one cardiac cycle at 18 locations for the short axis view. Subsequently, realistic images are generated from these labels, with modality-specific features that are learned from real CMR image data. We demonstrate that style transfer from another cardiac image can be accomplished by using a style encoder network. Due to the flexibility of XCAT in creating new heart models, this approach can result in a realistic virtual population to address different challenges the medical image analysis research community is facing such as expensive data collection. Our proposed method has a great potential to synthesize 4D controllable CMR images with annotations and adaptable styles to be used in various supervised multi-site, multi-vendor applications in medical image analysis.""","""This paper presents work on how to combine data-driven and physics-driven trainning of deep learning algorithms to improve the quality of image synthesis with less annotations. This is a relevant topic and while the manuscript received some critiques, the authors have done a good job in responding to them and have proposed specific improvements they will do in the final manuscript. I think this paper will generate interesting discussions at MIDL and is worth presenting. """ 32,"""Chester: A Web Delivered Locally Computed Chest X-Ray Disease Prediction System""","['Chest X-Ray', 'Radiology', 'Deep Learning']","""In order to bridge the gap between Deep Learning researchers and medical professionals we develop a very accessible free prototype system which can be used by medical professionals to understand the reality of Deep Learning tools for chest X-ray diagnostics. The system is designed to be a second opinion where a user can process an image to confirm or aid in their diagnosis. Code and network weights are delivered via a URL to a web browser (including cell phones) but the patient data remains on the users machine and all processing occurs locally. This paper discusses the three main components in detail: out-of-distribution detection, disease prediction, and prediction explanation. The system open source and freely available.""","""This paper presents a web-based tool for chest x-ray diagnosis. The paper is interesting from the point of view of illustrating and discussing with the MIDL community what it will take for our methods to be widely and freely applicable to practitioners, beyond our own collaborating clinicians.""" 33,"""Vispi: Automatic Visual Perception and Interpretation of Chest X-rays""","['Medical Image Report Generation', 'Disease Classification and Localization', 'Visual Perception', 'Attention', 'Deep Learning']","""Computer-aided medical image visual perception and interpretation with deep learning remain a challenging task, due to the lack of high-quality annotated image-report pairs and tailor-made generative models for sufficient extraction and exploitation of localized semantic features associated with abnormalities. To tackle these challenges, we present Vispi, an automatic medical image interpretation system, which first annotates an image via classifying and localizing common thoracic diseases with visual support and then followed by report generation from an attentive LSTM model. Analyzing an open IU X-ray dataset, we demonstrate a superior performance of Vispi in disease classification, localization and report generation using automatic performance evaluation metrics ROUGE and CIDEr.""","""This paper presents a pipeline for automatic interpretation of medical imaging and learning-based diagnosis. On the one hand, the paper tackles an important and challenging problem, but on the other hand, the validation is very limited and novelty over previous work is unclear. This is therefore a borderline paper.""" 34,"""Direct estimation of fetal head circumference from ultrasound images based on regression CNN""","['CNN', 'deep regression', 'ultrasound images', 'fetus head circumference']","""The measurement of fetal head circumference (HC) is performed throughout the pregnancy as a key biometric to monitor fetus growth. This measurement is performed on ultrasound images, via the manual fitting of an ellipse. The operation is operator-dependent and as such prone to intra and inter-variability error. There have been attempts to design automated segmentation algorithms to segment fetal head, especially based on deep encoding-decoding architectures. In this paper, we depart from this idea and propose to leverage the ability of convolutional neural networks (CNN) to directly measure the head circumference, without having to resort to handcrafted features or manually labeled segmented images. The intuition behind this idea is that the CNN will learn itself to localize and identify the head contour. Our approach is experimented on the public HC18 dataset, that contains images of all trimesters of the pregnancy. We investigate various architectures and three losses suitable for regression. While room for improvement is left, encouraging results show that it might be possible in the future to directly estimate the HC - without the need for a large dataset of manually segmented ultrasound images. This approach might be extended to other applications where segmentation is just an intermediate step to the computation of biomarkers.""","""The authors proposed to use a regression CNN to obtain a direct estimation of fetal head circumference from US images. The reviewers agree on the weak acceptance of this paper. """ 35,"""Robust Image Segmentation Quality Assessment """,[],"""Deep learning based image segmentation methods have achieved great success, even having human-level accuracy in some applications. However, due to the black box nature of deep learning, the best method may fail in some situations. Thus predicting segmentation quality without ground truth would be very crucial especially in clinical practice. Recently, people proposed to train neural networks to estimate the quality score by regression. Although it can achieve promising prediction accuracy, the network suffers robustness problem, e.g. it is vulnerable to adversarial attacks. In this paper, we propose to alleviate this problem by utilizing the difference between the input image and the reconstructed image, which is conditioned on the segmentation to be assessed, to lower the chance to overfit to the undesired image features from the original input image, and thus to increase the robustness. Results on ACDC17 dataset demonstrated our method is promising.""","""All the authors agree that the method has some potentials and the idea is interesting. I think that it would be interesting to be included as a short paper in MIDL.""" 36,"""Deblurring for spiral real-time MRI using convolutional neural networks""","['Artifact correction', 'CNN', 'deblurring', 'off-resonance', 'real-time MRI']","""Spiral acquisitions are preferred in real-time MRI because of their time efficiency. A fundamental limitation of spirals is image blurring due to off-resonance, which degrades image quality significantly at air-tissue boundaries. Here, we demonstrate a simple CNN-based deblurring method for spiral real-time MRI of human speech production. We show the CNN-based deblurring is capable of restoring blurred vocal tract tissue boundaries, without a need for exam-specific field maps. Deblurring performance is superior to a current auto-calibrated method, and slightly inferior to ideal reconstruction with perfect knowledge of the field maps. ""","""This paper applies CNN to do deblurring for spiral real-time MR imaging. The current presentation is okay for a short paper. """ 37,"""An ENAS Based Approach for Constructing Deep Learning Models for Breast Cancer Recognition from Ultrasound Images""","['Efficient Neural Architecture Search', 'Ultrasound Image', 'Breast Cancer', 'Deep Learning']","""Deep Convolutional Neural Networks (CNN) provides an ""end-to-end"" solution for image pattern recognition with impressive performance in many areas of application including medical imaging. Most CNN models of high performance use hand-crafted network architectures that require expertise in CNNs to utilise their potentials. In this paper, we applied the Efficient Neural Architecture Search (ENAS) method to find optimal CNN architectures for classifying breast lesions from ultrasound (US) images. Our empirical study with a dataset of 524 US images shows that the optimal models generated by ENAS achieve an average accuracy of 89.3%, surpassing other hand-crafted alternatives. Furthermore, the models are simpler in complexity and more efficient. Our study demonstrates that the ENAS approach to CNN model design is a promising direction for classifying ultrasound images of breast lesions.""","""While this paper has received a mix of rates (2 weak accepts and 2 weak rejects), l tend towards rating this paper as weak accept. Nevertheless, I believe that authors need to address some important questions in the near future. As a summary, these are the main concerns of this work: - Authors do not take into consideration the characteristics of ultrasound images. The motivation to augment this dataset (i.e., rotating 90, 180 and 270 degrees) seems invalid. I agree with the reviewers that this can not only not help, but also hurt the performance of the method. - Authors are encouraged to include higher performing models in their evaluation to strength the findings and results of this work. - Furthermore, training on these models needs to be better detailed.""" 38,"""Accurate Detection of Out of Body Segments in Surgical Video using Semi-Supervised Learning""","['Surgical Intelligence', 'Semi-Supervised Learning', 'Deep Learning', 'Surgical Video Anonymization', 'Out of Body Detection']","""Large labeled datasets are an important precondition for deep learning models to achieve state-of-the-art results in computer vision tasks. In the medical imaging domain, privacy concerns have limited the rate of adoption of artificial intelligence methodologies into clinical practice. To alleviate such concerns, and increase comfort levels while sharing and storing surgical video data, we propose a high accuracy method for rapid removal and anonymization of out-of-body and non-relevant surgery segments. Training a deep model to detect out-of-body and non-relevant segments in surgical videos requires suitable labeling. Since annotating surgical videos with per-second relevancy labeling is a tedious task, our proposed framework initiates the learning process from a weakly labeled noisy dataset and iteratively applies Semi-Supervised Learning (SSL) to re-annotate the training data samples. Evaluating our model, on an independent test set, shows a mean detection accuracy of above 97% after several training-annotating iterations. Since our final goal is achieving out-of-body segments detection for anonymization, we evaluate our ability to detect these segments at a high demanding recall of 97%, which leads to a precision of 83.5%. We believe this approach can be applied to similar related medical problems, in which only a coarse set of relevancy labels exists, currently limiting the possibility for supervision training.""","""The reviewers agree that the application presented in this manuscript is of interest but alos point to a lack of methodological contributions and to some shortcoming in the evaluation of teh method. The authors have nonethless provided interesting responses in their rebuttal and the paper might lead to interesting discussion at the conference.""" 39,"""Deep learning-based parameter mapping for joint relaxation and diffusion tensor MR Fingerprinting""","['Magnetic Resonance Fingerprinting', 'Convolutional Neural Network', 'Image Reconstruction', 'Diffusion Tensor', 'Multiple Sclerosis']","""Magnetic Resonance Fingerprinting (MRF) enables the simultaneous quantification of multiple properties of biological tissues. It relies on a pseudo-random acquisition and the matching of acquired signal evolutions to a precomputed dictionary. However, the dictionary is not scalable to higher-parametric spaces, limiting MRF to the simultaneous mapping of only a small number of parameters (proton density, T1 and T2 in general). Inspired by diffusion-weighted SSFP imaging, we present a proof-of-concept of a novel MRF sequence with embedded diffusion-encoding gradients along all three axes to efficiently encode orientational diffusion and T1 and T2 relaxation. We take advantage of a convolutional neural network (CNN) to reconstruct multiple quantitative maps from this single, highly undersampled acquisition. We bypass expensive dictionary matching by learning the implicit physical relationships between the spatiotemporal MRF data and the T1, T2 and diffusion tensor parameters. The predicted parameter maps and the derived scalar diffusion metrics agree well with state-of-the-art reference protocols. Orientational diffusion information is captured as seen from the estimated primary diffusion directions. In addition to this, the joint acquisition and reconstruction framework proves capable of preserving tissue abnormalities in multiple sclerosis lesions.""","""While two reviewers are enthusiastic about the paper, I tend to agree with reviewer 1 who points out this contribution does not fit well within the MIDL domain - it's deep learning portion consists of using a straight up U-net. While MRF is certainly of interest to many in the MIDL community as evidenced by the previous MIDL papers listed by the authors in their rebuttal, each of those papers can be seen to have a more substantial deep learning component (and/or are on the more limited short paper track).""" 40,"""Deep learning-based retinal vessel segmentation with cross-modal evaluation""","['deep learning', 'retina', 'vessel segmentation', 'scanning laser ophthalmoscopy', 'fundus photography']","""This work proposes a general pipeline for retinal vessel segmentation on en-face images. The main goal is to analyse if a model trained in one of two modalities, Fundus Photography (FP) or Scanning Laser Ophthalmoscopy (SLO), is transferable to the other modality accurately. This is motivated by the lack of development and data available in en-face imaging modalities other than FP. FP and SLO images of four and two publicly available datasets, respectively, were used. First, the current approaches were reviewed in order to define a basic pipeline for vessel segmentation. A state-of-art deep learning architecture (U-net) was used, and the effect of varying the patch size and number of patches was studied by training, validating, and testing on each dataset individually. Next, the model was trained in either FP or SLO images, using the available datasets for a given modality combined. Finally, the performance of each network was tested on the other modality. The models trained on each dataset showed a performance comparable to the state-of-the art and to the inter-rater reliability. Overall, the best performance was observed for the largest patch size (256) and the maximum number of overlapped images in each dataset, with a mean sensitivity, specificity, accuracy, and Dice score of 0.89 pseudo-formula 0.05, 0.95 pseudo-formula 0.02, 0.95 pseudo-formula 0.02, and 0.73 pseudo-formula 0.07, respectively. Models trained and tested on the same modality presented a sensitivity, specificity, and accuracy equal or higher than 0.9. The validation on a different modality has shown significantly better sensitivity and Dice on those trained on FP.""","""This paper proposes a deep learning-based method for retinal vessel segmentation based on cross-modal learning. The rebuttal convinced most of the reviewers.""" 41,"""Effect of GAN augmented dataset size on deep learning-based ultrasound bone segmentation model training""","['Ultrasound bone segmentation', 'GAN', 'data augmentation']","""Recently, ultrasound imaging is increasingly being used as intra-operative imaging modality in osteolytic bone surgery due to its cost-effectiveness and radiation-free nature. Deep learning approaches have shown remarkable success in segmenting bone surface from ultrasound images. However, limited training dataset size has always hindered the success of deep learning approaches, which is especially evident in ultrasound bone segmentation since there is no publicly available quality dataset. To resolve the issue, in addition to standard data augmentation approaches, generative adversarial networks (GANs) have recently been used for generating augmented training samples. Although the addition of the generative approach in data augmentation is observed to have a positive effect on deep learning architecture's performance, the question about the effect of using a multi-fold of GAN generated training dataset remains unanswered. In this work, we have generated 14 fold of GAN-augmented training dataset and evaluated the performance of the network for successively increased dataset size. Our test results show that although the use of the GAN-augmented training dataset in addition to standard augmentation approaches helps perform the network better, the addition of multi-fold GAN-augmented datasets has no noticeable performance gain.""","""This paper investigate if the use of GAN-based data augmentation improves bone segmentation from an ultrasound image. The three reviewers consistently rate the paper as 'weak reject' due to reasons like weak evaluation, lack of motivation, etc. The AC agrees with the three reviewers and encourages the authors to follow the suggestions from the reviewers to improve the paper quality.""" 42,"""An Auxiliary Task for Learning Nuclei Segmentation in 3D Microscopy Images""","['machine learning', 'image analysis', 'instance segmentation', 'instance detection', 'nuclei segmentation', 'auxiliary training task']","""Segmentation of cell nuclei in microscopy images is a prevalent necessity in cell biology. Especially for three-dimensional datasets, manual segmentation is prohibitively time-consuming, motivating the need for automated methods. Learning-based methods trained on pixel-wise ground-truth segmentations have been shown to yield state-of-the-art results on 2d benchmark image data of nuclei, yet a respective benchmark is missing for 3d image data. In this work, we perform a comparative evaluation of nuclei segmentation algorithms on a database of manually segmented 3d light microscopy volumes. We propose a novel learning strategy that boosts segmentation accuracy by means of a simple auxiliary task, thereby robustly outperforming each of our baselines. Furthermore, we show that one of our baselines, the popular three-label model, when trained with our proposed auxiliary task, outperforms the recent StarDist-3D. As an additional, practical contribution, we benchmark nuclei segmentation against nuclei detection, i.e. the task of merely pinpointing individual nuclei without generating respective pixel-accurate segmentations. For learning nuclei detection, large 3d training datasets of manually annotated nuclei center points are available. However, the impact on detection accuracy caused by training on such sparse ground truth as opposed to dense pixel-wise ground truth has not yet been quantified. To this end, we compare nuclei detection accuracy yielded by training on dense vs. sparse ground truth. Our results suggest that training on sparse ground truth yields competitive nuclei detection rates. ""","""This paper presents a solid approach for nuclei segmentation using auxiliary task learning by introducing a detection problem. While not entirely new from a methodological point of view, reviewers agree that there is value in the specific approach and its validation on the proposed dataset.""" 43,"""Correlation via Synthesis: End-to-end Image Generation and Radiogenomic Learning Based on Generative Adversarial Network""","['Image synthesis', 'Radiogenomic learning', 'Multi-conditional GAN']","""Radiogenomic map linking image features and gene expression profiles has great potential for non-invasively identifying molecular properties of a particular type of disease. Conventionally, such map is produced in three independent steps: 1) gene-clustering to metagenes, 2) image feature extraction, and 3) statistical correlation between metagenes and image features. Each step is separately performed and relies on arbitrary measurements without considering the correlation among each other. In this work, we investigate the potential of an end-to-end method fusing gene code with image features to generate synthetic pathology image and learn radiogenomic map simultaneously. To achieve this goal, we develop a multi-conditional generative adversarial network (GAN) conditioned on both background images and gene expression code, synthesizing the corresponding image. Image and gene features are fused at different scales to ensure both the separation of pathology part and background, as well as the realism and quality of the synthesized image. We tested our method on non-small cell lung cancer (NSCLC) dataset. Results demonstrate that the proposed method produces realistic synthetic images, and provides a promising way to find gene-image relationship in a holistic end-to-end manner.""","""All the reviewers seem to agree that the ideas presented in the paper and the combination of image and genetic information are very interesting while the paper is clearly presented and written. I agree that the idea is very interesting and it can be also very interesting for the community. The authors addressed the comments of the reviewers properly and I encourage them to incorporate them in the final version of the paper to improve their final version.""" 44,"""Siamese Tracking of Cell Behaviour Patterns""","['Mitosis', 'cell collision', 'segmentation', 'cell tracking', 'Siamese tracker', 'U-Net', 'cell tracking challenge']","""Tracking and segmentation of biological cells in video sequences is a challenging problem, especially due to the similarity of the cells and high levels of inherent noise. Most machine learning-based approaches lack robustness and suffer from sensitivity to less prominent events such as mitosis, apoptosis and cell collisions. Due to the large variance in medical image characteristics, most approaches are dataset-specific and do not generalise well on other datasets. In this paper, we propose a simple end-to-end cascade neural architecture able to model the movement behaviour of biological cells and predict collision and mitosis events. Our approach uses U-Net for an initial segmentation which is then improved through processing by a siamese tracker capable of matching each cell along the temporal axis. By facilitating the re-segmentation of collided and mitotic cells, our method demonstrates its capability to handle volatile trajectories and unpredictable cell locations while being invariant to cell morphology. We demonstrate that our tracking approach achieves state-of-the-art results on PhC-C2DL-PSC and Fluo-N2DH-SIM+ datasets and ranks second on the DIC-C2DH-HeLa dataset of the cell tracking challenge benchmarks. ""","""Reviewers are in overall favor of this work. Despite some concerns about the individual processing stages not being overly original, their composing into a working pipeline seems to be non-trivial. Accompanied by the extensive experimental evaluation, there is good evidence that this contribution is worthwhile to be presented at MIDL. Also, authors went into great detail in explaining and reacting to critical reviewer comments regarding structuring and better explanations in parts of the work, which leads me to expect that the final paper has potential for a good MIDL contribution.""" 45,"""Mutual information based deep clustering for semi-supervised segmentation""","['Semantic segmentation', 'Semi-supervised learning', 'Deep clustering', 'Mutual information', 'Convolutional neural network']","""The scarcity of labeled data often limits the application of deep learning to medical image segmentation. Semi-supervised learning helps overcome this limitation by leveraging unlabeled images to guide the learning process. In this paper, we propose using a clustering loss based on mutual information that explicitly enforces prediction consistency between nearby pixels in unlabeled images, and for random perturbation of these images, while imposing the network to predict the correct labels for annotated images. Since mutual information does not require a strict ordering of clusters in two different cluster assignments, we propose to incorporate another consistency regularization loss which forces the alignment of class probabilities at each pixel of perturbed unlabeled images. We evaluate the method on three challenging publicly-available medical datasets for image segmentation. Experimental results show our method to outperform recently-proposed approaches for semi-supervised and yield a performance comparable to fully-supervised training.""","""The reviewers found the paper interesting, and overall there is probably enough support to accept the paper into MIDL. I also appreciate the thorough and clear answers by the authors. However, the reviewers noted very important issues that **need to be corrected** in the camera ready. Reviewer #1's comments about existing literature on semi-supervised segmentation was brushed away by the authors, which I find quite troubling. Papers mentioned by Reviewer #1, which might appear as ""data augmentation"" papers in the title and use a completely different methodology (not using MI), definitely tackle the same problem (semi-supervised segmentation) where they take advantage of very limited labelled data and a host of unlabelled data. Including a discussion of related semi-supervised segmentation work and placing the work in that context is important for the paper to be complete. Similarly, adding the Bortsova paper in the introduction is a requirement -- it did not only appear on arxiv at the end of the year, but rather was properly published at MICCAI 2019. The updated results with variance estimations also need to be in the paper. I also encourage the authors to also take all the other feedback and incorporate it in the paper. """ 46,"""Deep Reinforcement Learning for Organ Localization in CT""","['Organ localization', 'deep reinforcement learning', 'computed tomography']","""Robust localization of organs in computed tomography scans is a constant pre-processing requirement for organ-specific image retrieval, radiotherapy planning, and interventional image analysis. In contrast to current solutions based on exhaustive search or region proposals, which require large amounts of annotated data, we propose a deep reinforcement learning approach for organ localization in CT. In this work, an artificial agent is actively self-taught to localize organs in CT by learning from its asserts and mistakes. Within the context of reinforcement learning, we propose a novel set of actions tailored for organ localization in CT. Our method can use as a plug-and-play module for localizing any organ of interest. We evaluate the proposed solution on the public VISCERAL dataset containing CT scans with varying fields of view and multiple organs. We achieved an overall intersection over union of 0.63, an absolute median wall distance of 2.25 mm and a median distance between centroids of 3.65 mm.""","""Overall, the reviewers find this work of enough interest to warrant acceptance. The method is an extension of prior work on reinforcement learning in medical imaging and any claims around this work being the ""first time"" should be rephrased to reflect this. """ 47,"""Enhancing Foreground Boundaries for Medical Image Segmentation""","['Deep learning', 'medical image segmentation', 'boundary enhancement.']","""Object segmentation plays an important role in the modern medical image analysis, which benefits clinical study, disease diagnosis, and surgery planning. Given the various modalities of medical images, the automated or semi-automated segmentation approaches have been used to identify and parse organs, bones, tumors, and other regions-of-interest (ROI). However, these contemporary segmentation approaches tend to fail to predict the boundary areas of ROI, because of the fuzzy appearance contrast caused during the imaging procedure. To further improve the segmentation quality of boundary areas, we propose a boundary enhancement loss to enforce additional constraints on optimizing machine learning models. The proposed loss function is light-weighted and easy to implement without any pre- or post-processing. Our experimental results validate that our loss function are better than, or at least comparable to, other state-of-the-art loss functions in terms of segmentation accuracy.""","""I agree with the majority of reviewers that the use of Laplacian based loss is interesting and the paper is well presented. I recommend the acceptance of this short paper and encourage the authors to integrate the suggestions of the reviewers in their final version.""" 48,"""Improving the Ability of Deep Networks to Use Information From Multiple Views in Breast Cancer Screening""","['Breast cancer screening', 'deep neural networks', 'multi-modal learning', 'multi-view learning.']","""In breast cancer screening, radiologists make the diagnosis based on images that are taken from two angles. Inspired by this, we seek to improve the performance of deep neural networks applied to this task by encouraging the model to use information from both views of the breast. First, we take a closer look at the training process and observe an imbalance between learning from the two views. In particular, we observe that parameters of the layers processing one of the views have larger gradient norms and contribute more to the overall loss reduction. Next, we test several methods targeted at utilizing both views more equally in training. We find that using the same weights to process both views, or using a technique called modality dropout, leads to a boost in performance. Looking forward, our results indicate improving learning dynamics as a promising avenue for improving utilization of multiple views in deep neural networks for medical diagnosis.""","""All reviewers acknowledge the importance of the paper, the fact that it is well written with clear hypotheses and good experiments, and meaningful solutions. Reviews also listed a few issues, such as a limited number of lessons and conclusions reflected by a poor discussion section, and missing details about the experimental setup. Rebuttal addressed well most of the issues identified. I support the publication of the paper, but encourage the authors to address the issues identified. In case the authors address the issues, my rating can be between weak and strong accept and the paper can be either oral or poster.""" 49,"""4D Deep Learning for Multiple-Sclerosis Lesion Activity Segmentation""","['Multiple Sclerosis', 'Lesion Activity', 'Segmentation', '4D Deep Learning']","""Multiple sclerosis lesion activity segmentation is the task of detecting new and enlarging lesions that appeared between a baseline and a follow-up brain MRI scan. While deep learning methods for single-scan lesion segmentation are common, deep learning approaches for lesion activity have only been proposed recently. Here, a two-path architecture processes two 3D MRI volumes from two time points. In this work, we investigate whether extending this problem to full 4D deep learning using a history of MRI volumes and thus an extended baseline can improve performance. For this purpose, we design a recurrent multi-encoder-decoder architecture for processing 4D data. We find that adding more temporal information is beneficial and our proposed architecture outperforms previous approaches with a lesion-wise true positive rate of 0.84 at a lesion-wise false positive rate of 0.19.""","""The presented paper appears to be well written and presents interesting and promising results on an important problem. There is still room for improvement in the validation and clarification of the methods""" 50,"""Brain Metastasis Segmentation Network Trained with Robustness to Annotations with Multiple False Negatives""","['Brain Metastasis', 'Segmentation', 'Deep Learning', 'False Negative', 'Noisy Label']","""Deep learning has proven to be an essential tool for medical image analysis. However, the need for accurately labeled input data, often requiring time- and labor-intensive annotation by experts, is a major limitation to the use of deep learning. One solution to this challenge is to allow for use of coarse or noisy labels, which could permit more efficient and scalable labeling of images. In this work, we develop a lopsided loss function based on entropy regularization that assumes the existence of a nontrivial false negative rate in the target annotations. Starting with a carefully annotated brain metastasis lesion dataset, we simulate data with false negatives by (1) randomly censoring the annotated lesions and (2) systematically censoring the smallest lesions. The latter better models true physician error because smaller lesions are harder to notice than the larger ones. Even with a simulated false negative rate as high as 50%, applying our loss function to randomly censored data preserves maximum sensitivity at 97% of the baseline with uncensored training data, compared to just 10% for a standard loss function. For the size-based censorship, performance is restored from 17% with the current standard to 88% with our lopsided bootstrap loss. Our work will enable more efficient scaling of the image labeling process, in parallel with other approaches on creating more efficient user interfaces and tools for annotation.""","""The reviews consistently emphasize that the paper is focused and mostly well-written (some concerns about equations clarity are addressed by the rebuttal). I agree with reviewer 1 about the usefulness of the base case alpha=1, beta=0 (for completeness). The authors provided such numbers in the rebuttal and should include these into the paper. As noted by reviewer 4, the authors should clearly emphasize that the paper aims at improving detection, rather than segmentation. I strongly encourage the authors to revised the title. For completeness, I would also like to encourage the authors to add technical explanation (even though it is standard) on relationship between prediction entropy and loss (2) using argmax. This would further improve readability. The methodological novelty of the paper is relatively minor (empirical study of re-weighting of standard terms), which explains the poster rating. """ 51,"""A Heteroscedastic Uncertainty Model for Decoupling Sources of MRI Image Quality""","['MRI', 'artefacts', 'uncertainty', 'quality control', 'deep learning']","""Quality control (QC) of medical images is essential to ensure that downstream analyses such as segmentation can be performed successfully. Currently, QC is predominantly performed visually at significant time and operator cost. We aim to automate the process by formulating a probabilistic network that estimates uncertainty through a heteroscedastic noise model, hence providing a proxy measure of task-specific image quality that is learnt directly from the data. By augmenting the training data with different types of simulated k-space artefacts, we propose a novel cascading CNN architecture based on a student-teacher framework with a weighted adaptive task loss to decouple sources of uncertainty related to different k-space augmentations in an entirely self-supervised manner. This enables us to predict separate uncertainty quantities for the different types of data degradation. While the uncertainty measures reflect the presence and severity of image artefacts, the network also provides the segmentation predictions given the quality of the data. We show that models trained with simulated artefacts provide more informative measures of uncertainty on real-world images and we validate our uncertainty predictions on problematic images identified by human-raters.""","""This paper tries to decouple some sources of MRI artifacts to assess image quality. All reviewers agree in the novelty of the paper and the results are sound although somehow limited due space limitation. Interesting paper worth to be presented in the MIDL """ 52,"""Red-GAN: Attacking class imbalance via conditioned generation. Yet another medical imaging perspective""",[],"""Exploiting learning algorithms under scarce data regimes is a limitation and a reality of the medical imaging field. In an attempt to mitigate the problem, we propose a data augmentation protocol based on generative adversarial networks. The networks are conditioned at a pixel-level (segmentation mask) and at a global-level information (acquisition environment or lesion type). Such conditioning provides immediate access to the image-label pairs while controlling class specific appearance of the synthesized images. To stimulate synthesis of the features relevant for the segmentation task, an additional passive player in a form of segmentor is introduced into the the adversarial game. We validate the approach on two medical datasets: BraTS, ISIC. By controlling the class distribution through injection of synthetic images into the training set we achieve control over the accuracy levels of the datasets' classes. ""","""Overall, the paper appears to be well written and present an interesting development based on previous work but the lack in quantitative validation is consistently highlighted""" 53,"""An Auto-Encoder Strategy for Adaptive Image Segmentation""","['Image Segmentation', 'Variational Auto-encoder']","""Deep neural networks are powerful tools for biomedical image segmentation. These models are often trained with heavy supervision, relying on pairs of images and corresponding voxel-level labels. However, obtaining segmentations of anatomical regions on a large number of cases can be prohibitively expensive. Thus there is a strong need for deep learning-based segmentation tools that do not require heavy supervision and can continuously adapt. In this paper, we propose a novel perspective of segmentation as a discrete representation learning problem, and present a variational autoencoder segmentation strategy that is flexible and adaptive. Our method, called Segmentation Auto-Encoder (SAE), leverages all available unlabeled scans and merely requires a segmentation prior, which can be a single unpaired segmentation image. In experiments, we apply SAE to brain MRI scans. Our results show that SAE can produce good quality segmentations, particularly when the prior is good. We demonstrate that a Markov Random Field prior can yield significantly better results than a spatially independent prior. Our code is freely available at pseudo-url. ""","""There is consensus that the technical novelty is limited, but that the results are interesting as a proof of a concept for unsupervised AE segmentation driven by second-order MRF prior combining atlas-based unary and pairwise terms. (in my opinion, ""unsupervised"" might be a better term in this case). When preparing a final version, you should take comments and criticism of the reviews very seriously (particularly for the most detailed first review). Implicit claim of better robustness to scanner variations should be removed. Relationship and differences with standard VAE should be thoroughly discussed. Limitations of your atlas-based prior should be emphasized, as discussed in many of the reviews. All other comments should also be carefully addressed.""" 54,"""Low-dose CT Enhancement Network with a Perceptual Loss Function in the Spatial Frequency and Image Domains""","['Low-dose CT image enhancement', 'convolutional neural networks', 'dual-domain deep learning']","""We propose a dual-domain cascade of U-nets (i.e. a ``W-net"") operating in both the spatial frequency and image domains to enhance low-dose CT (LDCT) images without the need for proprietary x-ray projection data. The central slice theorem motivated the use of the spatial frequency domain in place of the raw sinogram. Data were obtained from the AAPM Low-dose Grand Challenge. A combination of Fourier space (F) and/or image domain (I) U-nets and W-nets were trained with a multi-scale structural similarity and mean absolute error loss function to denoise filtered back projected (FBP) LDCT images while maintaining perceptual features important for diagnostic accuracy. Deep learning enhancements were superior to FBP LDCT images in quantitative and qualitative performance with the dual-domain W-nets outperforming single-domain U-net cascades. Our results suggest that spatial frequency learning in conjunction with image-domain processing can produce superior LDCT enhancement than image-domain-only networks. ""","""Interesting idea, well written paper. I suggest the authors to include and compare with relevant previous work as suggested by the reviewers.""" 55,"""Comparing Objective Functions for Segmentation and Detection of Tiny Lesions in Retinal Images""","['Semantic Segmentation', 'Detection', 'Diabetic Retinopathy', 'Diabetes', 'Retinal Imaging']","""Retinal microaneurysms (MAs) are the earliest signs of diabetic retinopathy (DR) which is the leading cause of blindness in the western world. MAs independently predict the risk of sight threatening DR and early detection is important to identify patients at risk. Detection and segmentation of retinal MAs present a particular challenging problem due to a large class imbalance with MA pixels accounting for less than 0.5% of the retinal image. Extreme foreground-background class imbalance can adversely affect the learning process in DNNs by introducing a bias towards the most well represented class. Recently, a number of objective functions have been proposed as alternatives to the standard Crossentropy loss in efforts to overcome this problem. In this work we investigate the influence of the network objective during optimization by comparing Residual U-nets trained for segmentation of MAs in retinal images using seven different objective functions; weighted and unweighted Crossentropy loss, Dice loss, weighted and unweighted Focal loss, Focal Dice loss and Focal Tversky loss. Three networks with different seeds are trained for each objective function using optimized hyper-parameter settings on a dataset of 382 images with pixel level annotations for MAs. The instance level MA detection performance is evaluated as the average free response receiver operator characteristic (FROC) score calculated as the mean sensitivity at seven average false positives (FPAvg) per image thresholds on 80 test images. The image level MA detection performance is evaluated as the average AUC on the same images as well as a separate test set of 1200 images. Segmentation performance is evaluated as the average pixel precision (AP). The unweighted Crossentropy loss and Focal loss outperforms all other losses for instance level detection achieving FROC scores of 0.5067(0.0115) and 0.5062(0.0045. The Focal loss has the highest pixel precision with an AP of 0.4254(0.0096). For image level detection both objective functions in their unweighted form perform significantly better compared to using all other objectives. AUCs of 0.9450(0.0080) and 0.8351(0.0039) on the two test are achieved using the unweighted Crossentropy loss, while AUCs for the unweighted Focal loss was 0.9375(0.0074) and 0.8253(0.0042) respectivly. Conclusion: Despite the promise of using training objectives designed to deal with unbalanced data, the standard Crossentropy loss perform at least as well or better than all other objective functions in our experiments for lesion level and image level detection for small retinal MAs. While a number of newer objective functions have been introduced and shown to improve performance for unbalanced datasets compared to the Dice loss in recent years, our results suggest that it is important to also benchmark new losses against the Crossentropy or Focal loss function, as we achieve the best performance in all our test using these objectives.""","""The paper attempts at an in depth comparison of loss function with appropriate hyper-parameter tuning in the application of deep learning to retinal lesion detection and segmentation. It seems however overall unclear and with a lack of proper statistical analysis (which is particularly important in a validation paper with no other novel development)""" 56,"""Spatiotemporal motion prediction in free-breathing liver scans via a recurrent multi-scale encoder decoder""","['motion prediction', 'liver', 'MRI', 'free-breathing', 'LSTM']","""In this work we propose a multi-scale recurrent encoder-decoder architecture to predict the breathing induced organ deformation in future frames. The model was trained end-to-end from input images to predict a sequence of motion labels. Targets were created by quantizing the motion fields obtained from deformable image registration. We propose a multi-scale feature extraction scheme in the spatial encoder which processes the input at different resolutions. We report results using MRI free-breathing acquisitions from 12 volunteers. Experiments were aimed at investigating the proposed multi-scale design and the effect of increasing the number of predicted frames on the overall accuracy of the model. The proposed model was able to predict vessel positions in the next temporal image with a mean accuracy of 2.03 (2.89) mm showing increased performance in comparison with state-of-the-art approaches.""","""Two reviewers expressed substantial interest and value of the approach for motion prediction, one was somewhat interested and the last one provided a very short review at the last minute. I agree that the paper is neither fully comprehensible nor well validated, but given the page limit the authors present an interesting idea that might deserve further discussion. It should be possible to slightly improve the clarity given the very detailed and comprehensive reviews. In particular the question how exactly the quantisation of motion fields was performed and how it relates to discrete displacement registration would be an important one to answer in the final version.""" 57,"""The Performance of Deep U-Net Pre-Clinical Organ-wise Segmentation in the Presence of Low Counting Statistics.""","['deep learning', 'pre-clinical imaging', 'PET-CT', 'image segmentation', 'artificial intelligence']","""Micro-PET-CT allows non-invasive monitoring of biological processes, disease progression and therapy response. Morphological information provided by the CT allows organ / tissue delineation for subsequent quantification of the physiological information depicted by the PET. Deep learning with convolutional neural networks (CNNs) has achieved state-of-the-art performance for automated medical image segmentation and utilized successfully by our group in Micro-PET-CT (figure 1). The robustness of such approaches in the presence of noise addition / dose reduction of the CT data has not been explored. We thus simulate dose reduction of pre-clinical CT images using a Poisson noise model and evaluate the effect of segmentation performance with increasingly lower dose for 7 regions (skeleton, kidney, bladder, brain, lung, muscle and fat). data in the preclinical model with increase dose reduction. It can be observed that the accuracy of the segmentation measured by the DICE coefficient falls off as we simulate the reduction of CT dose. A 50% dose reduction was observed for all 5 test subjects to result in a mean (across all 7 organs) percentage reduction in DICE <25%. Adequate performance however is still observed in the DICE coefficient with a dose reduction of 30%, only an average of ~10% reduction in DICE is observed. This may have implications for utilising reduced dose CT coupled with a deep CNN for segmentation if the CT component is used to anatomically locate physiology on PET data.""","""We have all reviewers rating this paper as 'strong reject'. This is a clear call and, hence, I would agree. """ 58,"""Generating Fundus Fluorescence Angiography Images from Structure Fundus Images Using Generative Adversarial Networks""","['Fundus Fluorescence Angiography Image', 'Structure Fundus Image', 'Image Translation', 'Generative Adversarial Network', 'Local Saliency Map']","""Fluorescein angiography can provide a map of retinal vascular structure and function, which is commonly used in ophthalmology diagnosis, however, this imaging modality may pose risks of harm to the patients. To help physicians reduce the potential risks of diagnosis, an image translation method is adopted. In this work, we proposed a conditional generative adversarial network (GAN)-based method to directly learn the mapping relationship between structure fundus images and fundus fluorescence angiography (FFA) images. Moreover, local saliency maps, which define each pixels importance, are used to define a novel saliency loss in the GAN cost function. This facilitates more accurate learning of small-vessel and fluorescein leakage features. The proposed method was validated on our dataset and the publicly available Isfahan MISP dataset with the metrics of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The experimental results indicate that the proposed method can accurately generate both retinal vascular and fluorescein leakage structures, which has great practical significance for clinical diagnosis and analysis.""","""This paper presents a method to generate Fundus Fluorescence Angiography from conventional fundus imaging. The work is based on a GAN framework modifying the loss function to include both local and global terms. All reviewers are very consistent on the clarity of the work and the thoroughness of the evaluation while also indicating that there is limited novelty on the methodology. I recommend this paper be accepted if possible. The score would be around 3.3.""" 59,"""Adversarial Domain Adaptation for Cell Segmentation""","['Cell Segmentation', 'Unsupervised and Semi-supervised Domain Adaptation']","""To successfully train a cell segmentation network in fully-supervised manner for a particular type of organ or cancer, we need the dataset with ground-truth annotations. However, high unavailability of such annotated dataset and tedious labeling process enforce us to discover a way for training with unlabeled dataset. In this paper, we propose a network named CellSegUDA for cell segmentation on the unlabeled dataset (target domain). It is achieved by applying unsupervised domain adaptation (UDA) technique with the help of another labeled dataset (source domain) that may come from other organs or sources. We validate our proposed CellSegUDA on two public cell segmentation datasets and obtain significant improvement as compared with the baseline methods. Finally, considering the scenario when we have a small number of annotations available from the target domain, we extend our work to CellSegSSDA, a semi-supervised domain adaptation (SSDA) based approach. Our SSDA model also gives excellent results which are quite close to the fully-supervised upper bound in target domain.""","""Three out of four reviewers recommended ""weak accept"", being convinced by the value of application on histology images, bi-directional evaluation of the adaptation method and clear presentation of this paper. I also think domain adaptation is an important problem to be tackled in the field of digital histopathology. This paper will add good value to this research topic. The final version should address the questions mentioned by the reviewers.""" 60,"""Medical Image Segmentation via Unsupervised Convolutional Neural Network""","['Image Segmentation', 'Convolutional Neural Networks', 'Unsupervised Learning']","""For the majority of the learning-based segmentation methods, a large quantity of high-quality training data is required. In this paper, we present a novel learning-based segmentation model that could be trained semi- or un- supervised. Specifically, in the unsupervised setting, we parameterize the Active contour without edges (ACWE) framework via a convolutional neural network (ConvNet), and optimize the parameters of the ConvNet using a self-supervised method. In another setting (semi-supervised), the auxiliary segmentation ground truth is used during training. We show that the method provides fast and high-quality bone segmentation in the context of single-photon emission computed tomography (SPECT) image.""","""2 out of 4 reviewers suggested weak acceptance of this work, while the other 2 suggested weak rejection. However, most of them acknowledged that the method introduces some interesting methodological insights (by combining active contour models with deep neural networks) and the main critic seems to be related to the lack of validation in non-simulated data. Since this is a short paper, even if the experimental validation is not as strong as it could be, I'm inclined to think that this work should be accepted for publication since it introduces a novel idea for the MIDL community. Note that the reviewers have expressed many constructive comments which could improve the quality of the final manuscript. Please, take them into account when submitting the camera ready version.""" 61,"""End-to-End Breast Mass Classification on Digital Breast Tomosynthesis ""","['Breast', 'Mammography', 'Deep Learning', 'Neural Network', 'Classification']","""Automatic classification of the masses in digital breast tomosynthesis(DBT) is still a big challenge and play a crucial role to assist radiologists for accurate diagnosis.In this paper,we develop a End-to-End multi-scale multi-level features fusion Network (EMMFFN) model for breast mass classication using DBT. Three multifaceted representations of the breast mass (gross mass, overview, and mass background) are extracted from the ROIs and then, fed into the EMMFFN model at the same time to generated three sets of feature map.The three feature maps are finally fused at the feature level to generate the final prediction.Our result show that the EMMFFN model achieves a breast mass classification AUC of 85.09%,which was superior to the single submodel who only use one aspect of patch.""","""All reviewers suggest that the proposed method is a quite standard multi-modal approach, but interesting in the sense that it uses different types of input from DBT. Nevertheless, the paper has important issues that need to be addressed. For instance, there are implementation details missing, where some design decisions are not well justified. In particular, why does the paper use these three patch types? Also, the multi-modal approach does not seem to perform significantly better than the mono-modal ones. Another issue was the specificity result that decreases with an increasing accuracy, which is a bit odd. To summarise, the paper shows a standard multi-modal classification that relies on new input types from DBT, but it lacks implementation details and shows results that do not seem to be relevant. Therefore, I agree with the reject rating by the reviewers.""" 62,"""An interpretable automated detection system for FISH-based HER2 oncogene amplification testing in histo-pathological routine images of breast and gastric cancer diagnostics""","['FISH imaging', 'HER2 amplification status', 'gastric cancer', 'breast cancer', 'digital pathology', 'deep learning', 'image classification', 'object segmentation and localization', 'interpretability']","""Histo-pathological diagnostics are an inherent part of the everyday work but are particularly laborious and often associated with time-consuming manual analysis of image data. In order to cope with the increasing diagnostic case numbers due to the current growth and demographic change of the global population and the progress in personalized medicine, pathologists ask for assistance. Profiting from digital pathology and the use of artificial intelligence, individual solutions can be offered (e.g. to detect labeled cancer tissue sections). The testing of the human epidermal growth factor receptor 2 (HER2) oncogene amplification status via fluorescence in situ hybridization (FISH) is recommended for breast and gastric cancer diagnostics and is regularly performed at clinics. Here, we developed a comprehensible, multi-step deep learning-based pipeline which automates the evaluation of FISH images with respect to HER2 gene amplification testing. It mimics the pathological assessment and relies on the detection and localization of interphase nuclei based on instance segmentation networks. Furthermore, it localizes and classifies fluorescence signals within each nucleus with the help of image classification and object detection convolutional neural networks (CNNs). Finally, the pipeline classifies the whole image regarding its HER2 amplification status. The visualization of pixels on which the networks' decision occurs, complements an essential part to enable interpretability by pathologists.""","""Most reviewers suggest acceptance of the paper, whereas one reviewer suggests a strong rejection. The reviewers suggesting acceptance indicate that the paper is well-written, easy to follow and that the results look very promising. The pipeline is generally well-described and the tasks of the individual components is clear. I do agree with the reviewer recommending rejection that some important details are missing and that these could have been added to the paper (e.g. dataset splits). However, I think rejection would be too harsh, given that this is a short paper and quite and extensive method, the authors had to choose what to include and what not. As such, I lean towards acceptance.""" 63,"""Automatic Segmentation of Head and Neck Tumors and Nodal Metastases in PET-CT scans""","['Head and neck cancer', 'deep learning', 'multimodal', 'PET-CT', '3D segmentation']","""Radiomics, the prediction of disease characteristics using quantitative image biomarkers from medical images, relies on expensive manual annotations of Regions of Interest (ROI) to focus the analysis. In this paper, we propose an automatic segmentation of Head and Neck (H&N) tumors and nodal metastases from FDG-PET and CT images. A fully-convolutional network (2D and 3D V-Net) is trained on PET-CT images using ground truth ROIs that were manually delineated by radiation oncologists for 202 patients. The results show the complementarity of the two modalities with a statistically significant improvement from 48.7% and 58.2% Dice Score Coefficients (DSC) with CT- and PET-only segmentation respectively, to 60.6% with a bimodal late fusion approach. We also note that, on this task, a 2D implementation slightly outperforms a similar 3D design (60.6% vs 59.7% for the best results respectively). The data is publicly available and the code will be shared on our GitHub repository.""","""We have four reviewers voting for an accept (one of them updating a 'weak accept' after the authors have responded to her/his critique). I would agree and recommend an 'accept' as well.""" 64,"""Adding Attention to Subspace Metric Learning""","['Deep Metric Learning', 'Attention mechanism', 'Medical Image', 'Subspace Embedding', 'Skin lesion imaging', 'Interpretability']","""Deep metric learning is a compelling approach to learn an embedding space where the images from the same class are encouraged to be close and images from different classes are pushed away. Current deep metric learning approaches are inadequate to explain visually which regions contribute to the learning embedding space. Visual explanations of images are particularly of interest in medical imaging, since interpretation directly impacts the diagnosis, treatment planning and follow-up of many diseases. In this work, we propose a novel attention-based metric learning approach for medical images and seek to bridge the gap between visual interpretability and deep metric learning. Our method builds upon a divide-and-conquer strategy, where multiple learners refine subspaces of a global embedding. Furthermore, we integrated an attention module that provides visual insights of discriminative regions that contribute to the clustering of image sets and to the visualization of their embedding features. We evaluate the benefits of using an attention-based approach for deep metric learning in the tasks of image clustering and image retrieval using a public benchmark on skin lesion detection. Our attentive deep metric learning improves the performance over recent state-of-the-art, while also providing visual interpretability of image similarities. ""","""Three groups of selected quotes from reviewers that were not fully addressed by the rebuttal and are sufficient to justify a reject: 1) the only difference of the proposed method from the CVPR2019 reference is adding attention modules... the presented work is incremental with limited novelty. 2) The attention model is not described well and is not motivated properly for this problem 3) The results are shown only for one dataset... The empirical results are not convincing enough... this dataset is not the best suited for demonstrating this idea""" 65,"""Calcium Score prediction and CAC localization using anisotropic convolutional networks.""","['CAC', 'Calcium Score', 'Cardiac CT', 'Deep Learning', 'Semantic Segmentation']","""AbstractIn this work we propose to apply deep learning semantic segmentation techniques to calcium quantification and localization. 3D CT chest imaging is an essential support to diagnosis of cardiovascular disease and coronaires calcification burden is one of its strongest indicators. CAC is quantified using a per coronary branch Agatston score. In this clinical context using deep learning techniques for multi class segmentation we could design an algorithm that automatically localizes and quantifies calcifications volumetry and Agatston score. The architecture used is inspired by Vnet [7], a popular model adapted to this particular CT exam modality, the key contribution is the use of anisotropic pooling and unpooling layers. 124 patients were provided by ***** and manually annotated by experts with clinical feedback. As a result we could achieve 0.9 average R2 = 1 - rMSE (relative mean square error) on multiple branches on a test set of 14 patient left out from the whole dataset. Index TermsCAC, Calcium Score, Cardiac CT, Deep Learning, Semantic Segmentation.""","""The paper received clear feedback that it was unfinished and did not meet the criteria for MIDL. I encourage the authors to continue development and submit more complete work to future conferences. I thank the reviewers for also giving specific suggestions on how to improve the paper. """ 66,"""Tensor Networks for Medical Image Classification""","['Tensor Networks', 'Image classification', 'Histopathology', 'Lung nodules']","""With the increasing adoption of machine learning tools like neural networks across several domains, interesting connections and comparisons to concepts from other domains are coming to light. In this work, we focus on the class of Tensor Networks, which has been a work horse for physicists in the last two decades to analyse quantum many-body systems. Building on the recent interest in tensor networks for machine learning, we extend the Matrix Product State tensor networks (which can be interpreted as linear classifiers operating in exponentially high dimensional spaces) to be useful in medical image analysis tasks. We focus on classification problems as a first step where we motivate the use of tensor networks and propose adaptions for 2D images using classical image domain concepts such as local orderlessness of images. With the proposed locally orderless tensor network model (LoTeNet), we show that tensor networks are capable of attaining performance that is comparable to state-of-the-art deep learning methods. We evaluate the model on two publicly available medical imaging datasets and show performance improvements with fewer model hyperparameters and lesser computational resources compared to relevant baseline methods.""","""This submission explores a very interesting concept, implicitly modelling high-dimensional decision boundaries, that has not been used a lot in medical deep learning. They build upon related work from computer vision and extend the concept to larger images, capturing both local and global information. All reviewers show high interest in this work and recommend acceptance (at least after the rebuttal). There was a fruitful discussion among reviewers and authors that will subsequently improve the final version. While the experimental validation is limited to rather small 2D patches (128x128) the results are promising and there are may certainly come interesting future papers that further extend these concepts.""" 67,"""Tractometry-based Anomaly Detection for Single-subject White Matter Analysis""","['Diffusion MRI', 'Tractometry', 'Anomaly Detection', 'Autoencoder']","""There is an urgent need for a paradigm shift from group-wise comparisons to individual diagnosis in diffusion MRI (dMRI) to enable the analysis of rare cases and clinically-heterogeneous groups. Deep autoencoders have shown great potential to detect anomalies in neuroimaging data. We present a framework that operates on the manifold of white matter (WM) pathways to learn normative microstructural features, and discriminate those at genetic risk from controls in a paediatric population. ""","""Although reviewers have raised concerns in terms of novelty and validation, most of them have given a positive evaluation. I therefore recommend acceptance. """ 68,"""Using Generative Models for Pediatric wbMRI""","['machine learning', 'generative models', 'cancer detection', 'MRI', 'whole body MRI']","""Early detection of cancer is key to a good prognosis and requires frequent testing, especially in pediatrics. Whole-body magnetic resonance imaging (wbMRI) is an essential part of several well-established screening protocols with screening starting in early childhood. To date, machine learning (ML) has been used on wbMRI images to stage adult cancer patients. It is not possible to use such tools in pediatrics due to the changing bone signal throughout growth, the difficulty of obtaining these images in young children due to movement and limited compliance, and the rarity of positive cases. We evaluate the quality of wbMRI images generated using generative adversarial networks (GANs) trained on wbMRI data from a pediatric hospital. We use the Frchet Inception Distance (FID) metric, Domain Frchet Distance (DFD), and blind tests with a radiology fellow for evaluation. We demonstrate that StyleGAN2 provides the best performance in generating wbMRI images with respect to all three metrics.""","""This paper compares several GAN methods for generating pediatric wbMRI images. The paper is well written and results although limited are clear and interesting. The reviewers see merits in such a short paper and mostly think that it is worth to be presented at MIDL. """ 69,"""Automated Labelling using an Attention model for Radiology reports of MRI scans (ALARM)""","['NLP', 'BERT', 'BioBERT', 'automatic labelling']","""Labelling large datasets for training high-capacity neural networks is a major obstacle to the development of deep learning-based medical imaging applications. Here we present a transformer-based network for magnetic resonance imaging (MRI) radiology report classification which automates this task by assigning image labels on the basis of free-text expert radiology reports. Our models performance is comparable to that of an expert radiologist, and better than that of an expert physician, demonstrating the feasibility of this approach. We make code available online for researchers to label their own MRI datasets for medical imaging applications.""","""All reviewers recommend acceptance of the paper and the authors tried to address any remaining comments. I also think this is a topic with a lot of interest from the MIDL community. """ 70,"""3D FLAT - Feasible Learned Acquisition Trajectories for Accelerated MRI""","['Magnetic Resonance Imaging', '3D MRI', 'fast image acquisition', 'acceleration', 'image reconstruction', 'neural networks', 'deep learning', 'compressed sensing']","""Magnetic Resonance Imaging (MRI) is the gold standard of today's diagnostic imaging. The most significant drawback of MRI is the long acquisition time prohibiting its use in standard practice for some applications. Compressed sensing (CS) proposes to subsample the k-space (the Fourier domain dual to the physical space of spatial coordinates) leading to significantly accelerated acquisition. However, the benefit of compressed sensing has, to an extent, remained only theoretical because most of the sampling densities obtained through CS do not obey the stringent constraints of the MRI machine imposed in practice. Inspired by recent success of deep learning-based approaches for image reconstruction and ideas from computational imaging on learning-based design of imaging systems, we introduce 3D FLAT, a novel protocol to accelerate MRI acquisition in 3D. Our proposal leverages the entire 3D k-space to simultaneously learn a physically feasible acquisition trajectory with the reconstruction method. Experimental results suggest that 3D FLAT achieves a higher image quality for a given readout time compared to standard trajectories such as radial, stack-of-stars, or 2D learned trajectories. Furthermore, for the first time, we demonstrate the significant benefit of performing MRI acquisitions using non-Cartesian 3D trajectories over 2D non-Cartesian trajectories acquired slice-wise. ""","""The methodology presented is interesting. The main limitation seems to be its practical uses in the real machine. I suggest acceptance at this point but hope the authors may consider to improve it for real applications. Another concern I have is the complex-valued nature of the data, have you used complex convolution like Wang's group? Please properly refer to DeepcomplexMRI: Exploiting deep residual network for fast parallel MR imaging with complex convolution and Accelerating magnetic resonance imaging via deep learning; These works need to be properly cited as well. """ 71,"""Applying Machine Learning Algorithms for Kidney Disease Diagnosis""","['Kidney disease', 'Chronic kidney disease', 'Classification', 'Multi layer perceptron (MLP)', 'SVM', 'KNN']","""Kidney disease is a silent killer; it usually develops over time, and many people do not know they have it until it is very far along. Chronic Kidney Disease (represents a major public health problem in both developed and developing countries. In this rese arch, three different classification algorithms have been used in order to evaluate the occurrence of ckd on a dataset collected from the UCI Repository which holds 400 samples with 25 attributes. By filtering out t he top 14 features from the 2 4 input variables which show high score of dependability, the experimental results manifest t hat Multi Layer Perceptron (MLP) works best for both normalized and unnormalized features yielding an accuracy of 100%.""","""This paper applies standard machine learning classifiers (SVM, MLP, kNN) to a non-imaging dataset predicting kidney disease. While the reviewers agree that the analysis is sound, the novelty is very limited and the relevance for MIDL is low given that the paper does not use medical imaging data.""" 72,"""Towards multi-sequence MR image recovery from undersampled k-space data""",[],"""Undersampled MR image recovery has been widely studied for accelerated MR acquisition. However, it has been mostly studied under a single sequence scenario, despite the fact that multi-sequence MR scan is common in practice. In this paper, we aim to optimize multi-sequence MR image recovery from undersampled k-space data under an overall time constraint while considering the difference in acquisition time for various sequences. We first formulate it as a constrained optimization problem and then show that finding the optimal sampling strategy for all sequences and the best recovery model at the same time is combinatorial and hence computationally prohibitive. To solve this problem, we propose a blind recovery model that simultaneously recovers multiple sequences, and an efficient approach to find proper combination of sampling strategy and recovery model. Our experiments demonstrate that the proposed method outperforms sequence-wise recovery, and sheds light on how to decide the undersampling strategy for sequences within an overall time budget.""","""The paper proposes to recontruct multi-sequence MR image from undersampled k-space data. Using a blind recovery model to evaluate sampling strategies is a creative idea. But some related work has to be cited. For example, the idea of employing deep learning for multicontrast MRI imaging was explored in ""Feasibility of Multi-Contrast MR Imaging Via Deep Learning in ISMRM 2017""""" 73,"""Well-Calibrated Regression Uncertainty in Medical Imaging with Deep Learning""","['bayesian approximation', 'variational inference']","""The consideration of predictive uncertainty in medical imaging with deep learning is of utmost importance. We apply estimation of predictive uncertainty by variational Bayesian inference with Monte Carlo dropout to regression tasks and show why predictive uncertainty is systematically underestimated. We suggest to use scaling with a single scalar value; a simple, yet effective calibration method for both aleatoric and epistemic uncertainty. The performance of our approach is evaluated on a variety of common medical regression data sets using different state-of-the-art convolutional network architectures. In all experiments, $ scaling is able to reliably recalibrate predictive uncertainty, surpassing more complex calibration methods. It is easy to implement and maintains the accuracy. Well-calibrated uncertainty in regression allows robust rejection of unreliable predictions or detection of out-of-distribution samples.""","""All reviewers highlight the importance of the topic being adressed and 3 out of 4 reviewers agree that the paper introduces an interesting treatment of the problem with convincing evaluation results. The main criticism from the remaining reviewer pertains to the methodological treatment of the graphical model. I feel these comments are fair and already led to interesting discussion with the authors who agreed on going for a more rigorous treatment in a revised version of the manuscript. Overall, I feel the paper will lead to interesting discussions at the conference and includes sufficient material for presentation at the conference.""" 74,"""Bounding boxes for weakly supervised segmentation: Global constraints get close to full supervision""","['CNN', 'image segmentation', 'weak supervision', 'bounding boxes', 'global constraints', 'Lagrangian optimization', 'log-barriers']","""We propose a novel weakly supervised learning segmentation based on several global constraints derived from box annotations. Particularly, we leverage a classical tightness prior to a deep learning setting via imposing a set of constraints on the network outputs. Such a powerful topological prior prevents solutions from excessive shrinking by enforcing any horizontal or vertical line within the bounding box to contain, at least, one pixel of the foreground region. Furthermore, we integrate our deep tightness prior with a global background emptiness constraint, guiding training with information outside the bounding box. We demonstrate experimentally that such a global constraint is much more powerful than standard cross-entropy for the background class. Our optimization problem is challenging as it takes the form of a large set of inequality constraints on the outputs of deep networks. We solve it with sequence of unconstrained losses based on a recent powerful extension of the log-barrier method, which is well-known in the context of interior-point methods. This accommodates standard stochastic gradient descent (SGD) for training deep networks, while avoiding computationally expensive and unstable Lagrangian dual steps and projections. Extensive experiments over two different public data sets and applications (prostate and brain lesions) demonstrate that the synergy between our global tightness and emptiness priors yield very competitive performances, approaching full supervision and outperforming significantly DeepCut. Furthermore, our approach removes the need for computationally expensive proposal generation. Our code is shared anonymously. ""","""3 out of 4 reviewers recommended strong acceptance of this work, while 1 reviewer recommended weak rejection. After reading their comments and discussion with the authors, I see that most of the issues raised by the reviewers have been addressed in the authors' response. I therefore think this work can be accepted for publication at MIDL. Please, when submitting the Camera Ready version, take into account the suggestions made by the reviewers.""" 75,"""SAU-Net: Efficient 3D Spine MRI Segmentation Using Inter-Slice Attention""","['spine segmentation', 'MRI', 'deep learning', 'inter-slice attention']","""Accurate segmentation of spine Magnetic Resonance Imaging (MRI) is highly demanded in morphological research, quantitative analysis, and diseases identification, such as spinal canal stenosis, disc herniation and degeneration. However, accurate spine segmentation is challenging because of the irregular shape, artifacts and large variability between slices. To alleviate these problems, spatial information is used for more continuous and accurate segmentation such as by 3D convolutional neural networks (CNN) . However, 3D CNN suffers from higher computational cost, memory cost and risk of over-fitting, especially for medical images where the number of labeled data is limited. To address these problems, we apply the attention mechanism for the utilization of inter-slice information in 3D segmentation tasks based on 2D convolutional networks and propose a spatial attention-based densely connected U-Net (SAU-Net), which consists of Dense U-Net for extraction of intra-slice features and an inter-slice attention module (ISA) to utilize inter-slice information from adjacent slices and refine the segmentation results. Experimental results demonstrate the effectiveness of ISA as well as higher accuracy and efficiency of segmentation results of our method compared with other deep learning methods.""","""The majority of reviews are generally positive. The good performance compared to a baseline 2D and 3D U-Net implementation is promising. The authors have addressed the major concerns raised by the negative review in their rebuttal sufficiently well. The requested surface-based evaluation metrics show again a promising performance of the proposed method and can be added to the final paper.""" 76,"""Functional Space Variational Inference for Uncertainty Estimation in Computer Aided Diagnosis""","['Uncertainty estimation', 'Variational Inference', 'Calibration', 'Skin Lesion']","""Deep neural networks have revolutionized medical image analysis and disease diagnosis. Despite their impressive performance, it is dicult to generate well-calibrated probabilistic outputs for such networks, which makes them uninterpretable black boxes. Bayesian neural networks provide a principled approach for modelling uncertainty and increasing patient safety, but they have a large computational overhead and provide limited improvement in calibration. In this work, by taking skin lesion classication as an example task, we show that by shifting Bayesian inference to the functional space we can craft meaningful priors that give better-calibrated uncertainty estimates at a much lower computational cost""","""All reviewers agree that, despite some presentation flaws, the work pursues an interest direction and provides a new interesting perspective. While I understand some of the more serious concerns raised by Rev. 3 in terms of lack of detail or the paper being half baked, I also think that one also needs to factor in the short paper format of the submission.""" 77,"""dMRI-SRGAN: Diffusion MRI Super Resolution GAN""","['Diffusion MRI (dMRI)', 'Diffusion Spatial Super-resolution', 'Generative adversarial networks (GANs)', 'dMRI-SRGAN']","""The ability to acquire high resolution (HR) images of the brain is essential for an enhanced in vivo diagnosis, prognosis and monitoring of diseases, as well as an improved analysis of brain anatomy and physiology. Diffusion Magnetic Resonance Imaging (dMRI) is one of the imaging technique that suffers the most the lack of high resolution imaging method. One way to overcome this limitation is to use super resolution (SR) methods, which are generative techniques that can be applied post-acquisition to increase the resolution of the data. The most straightforward approach is to minimize a loss function in a self-supervised way. Although self-supervised Generative Adversarial Networks (GANs) have been shown to obtain realistic HR images from low resolution images, they do not provide sufficient information to minimize error. Therefore, there are two basic ways to provide extra information and constraint during the training phase. The first is to add information using labeled data during training and the second one is to manipulate the data and the network to use inherent information in the data to improve generated SR image. However, in this semi-supervised approach, the labeling process is not simple and extremely time-consuming. In this paper, we propose a novel self-supervision method for diffusion MRI, called dMRI-SRGAN, to reduce the reconstruction error using extra information to create SR of dMRI data. We compare our new method with the baseline SRGAN approach and semi-SRGAN approach that uses label information. Our proposed method gives better PSNR value with respect to baseline method and best fractional anisotropy (FA) value among the three methods. Moreover we compare the structural connectome analysis of the SR images with the original data and show that our SR method is able to preserve the brain connectivity.""","""This paper presents a Diffusion MRI Super-Resolution method based on GANs. As stated by some reviewers is it impossible to evaluate correctly the proposed method due to the lack of relevant similar methods (Coupe et al. 2013 for example). The authors only compare with other GAN based methods ignoring other relevant non-deep learning-based approaches. Besides, the results are only marginally improving other similar methods and the images produced by the proposed method show evident slice artifacts. """ 78,"""Training deep segmentation networks on texture-encoded input: application to neuroimaging of the developing neonatal brain""","['Segmentation', 'convolutional neural networks', 'local binary patterns', 'texture', 'neuroimaging', 'neonatal', 'developing brain.']","""Standard practice for using convolutional neural networks (CNNs) in semantic segmentation tasks assumes that the image intensities are directly used for training and inference. In natural images this is performed using RGB pixel intensities, whereas in medical imaging, e.g. magnetic resonance imaging (MRI), gray level pixel intensities are typically used. In this work, we explore the idea of encoding the image data as local binary textural maps prior to the feeding them to CNNs, and show that accurate segmentation models can be developed using such maps alone, without learning any representations from the images themselves. This questions common consensus that CNNs recognize objects from images by learning increasingly complex representations of shape, and suggests a more important role to image texture, in line with recent findings on natural images. We illustrate this for the first time on neuroimaging data of the developing neonatal brain in a tissue segmentation task, by analyzing large, publicly available T2-weighted MRI scans (n=558, range of postmenstrual ages at scan: 24.3 - 42.2 weeks) obtained retrospectively from the Developing Human Connectome Project cohort. Rapid changes in visual characteristics that take place during early brain development make it important to establish a clear understanding of the role of visual texture when training CNN models on neuroimaging data of the neonatal brain; this yet remains a largely understudied but important area of research. From a deep learning perspective, the results suggest that CNNs could simply be capable of learning representations from structured spatial information, and may not necessarily require conventional images as input. ""","""Although methodological contributions are somewhat limited, the paper provides an interesting analysis on the role of texture for training a segmentation CNN, and demonstrates that texture in neonatal brain images can be used instead of original images to train the network. After the rebuttal, the majority of reviewers are in favour of accepting the paper. """ 79,"""Multitask radiological modality invariant landmark localization using deep reinforcement learning""","['multitask', 'reinforcement learning', 'landmark', 'MRI', 'multiparametric', 'radiology', 'deep learning', 'segmentation']","""Deep learning techniques are increasingly being developed for several applications in radiology, for example landmark and organ localization with segmentation. However, these applications to date have been limited in nature, in that, they are restricted to just a single task e.g. localization of tumors or to a specific organ using supervised training by an expert. As a result, to develop a radiological decision support system, it would need to be equipped with potentially hundreds of deep learning models with each model trained for a specific task or organ. This would be both space and computationally expensive. In addition, the true potential of deep learning methods in radiology can only be achieved when the model is adaptable and generalizable to multiple different tasks. To that end, we have developed and implemented a multitask modality invariant deep reinforcement learning framework (MIDRL) for landmark localization and segmentation in radiological applications. MIDRL was evaluated using a diverse data set containing multiparametric MRIs (mpMRI) acquired from different organs and with different imaging parameters. The MIDRL framework was trained to localize six different anatomical structures throughout the body, including, knee, trochanter, heart, kidney, breast nipple, and prostate across T1 weighted, T2 weighted, Dynamic Contrast Enhanced (DCE), Diffusion Weighted Imaging (DWI), and DIXON MRI sequences obtained from twenty-four breast, eight prostate, and twenty five whole body mpMRIs. The trained MIDRL framework produced excellent accuracy in localizing each of the six anatomical landmarks with an average dice similarity pseudo-formula 0.77, except for breast nipple localization in DCE. In conclusion, we developed a multitask deep reinforcement learning framework and demonstrated MIDRLs potential towards the development of a general AI for a radiological decision support system.""","""An interesting application of reinforcement learning in medical imaging. The reviewers seem to agree on this as well. Two out of three reviewers recommend 'Weak Accept', whereas one recommends 'Weak Reject'. I think the authors did a good job with their rebuttal, including new experiments and results which were requested by the reviewers. As such I recommend acceptance pending the condition that the authors include these new results in their camera-ready version.""" 80,"""A Fully Convolutional Normalization Approach of Head and Neck Cancer Outcome Prediction""","['Classification', 'head and neck cancer', 'deep learning', 'PET-CT', 'UNet-FCN', 'multi-domain', 'radiotherapy', 'outcome survival prediction']","""Medical image classification performance worsens in multi-domain datasets, caused by radiological image differences across institutions, scanner manufacturer, model and operator. Deep learning is well-suited for learning image features with priors encoded as constraints during the training process. In this work, we apply a ResNeXt classification network augmented with an FCN preprocessor subnetwork to a public TCIA head and neck cancer dataset. The training goal is survival prediction of radiotherapy cases based on pre-treatment FDG-PET/CT scans, acquired across 4 different hospitals. We show that the preprocessor sub-network acts as a embedding normalizer and improves over state-of-the-art results of 70% AUC to 76%.""","""The paper evaluates the performance of a model based on a UNet pre-processor followed by a ResNeXt classifier for survival prediction in head and neck cancer patients, using CT and PET/CT images. The paper is well-presented on the whole, the ideas are up-to-date and clearly described, and the ablation study is interesting. However, there is some slight caveat regarding the experimental results. As has been noted by all reviewers, the proposed approach is compared to state of the art methods with different inputs, and on a different dataset. So there is noconclusiveevidence as to whether the proposed approach is superior to existing ones. Some reviewers also note that the very beginning of the abstract (2 first sentences) that may be misleading and should be rewritten, and that additional details regarding training are missing.""" 81,"""Neuromorphologicaly-preserving Volumetric data encoding using VQ-VAE""","['3D', 'MRI', 'Morphology', 'Encoding', 'VQ-VAE']","""Due to recent advancements in both hardware and software, Deep Learning applied to medical imaging has become feasible at higher resolutions. Even so, due to the sheer size of a single image the models and the convergence speeds are being hindered. Recently the Vector-Quantisation Variational Autoencoder shown promising results in generating realistic images while compressing them to ~2% of their original size. Here, we show that a VQ-VAE inspired network can be used to compress the data to ~3% while maintaining reconstructed images that adhere to the same morphological and tissue statistics as the original data. Furthermore, we show that one can use one of our models that was trained on widely available neurologically healthy patients and fine-tune on pathological ones, thus allowing faster training times. ""","""There is agreement among the reviewers over both some positive aspects and some negative. I've carefully reviewed the material and concluded that the paper is not ready for publication. All reviewers seem to agree that this is an interesting application of the VQVAE work to medical images (and I agree as well). The novelty itself is challenged by all reviewers, and one important (and likely important) answer from the authors is that there is work involved to scaling up VQ-VAE and making it stable -- and while this is true, this answers the challenges of the work, not the amount of novelty (it also seems weak to use runtime as an argument for why ablation studies cannot be done). I think that a proper and thorough application paper is also appropriate for MIDL. Unfortunately, the reviewers challenge the scope and story of the paper as well, and I agree that there is significant confusion/contradiction, including the responses. For example, there is significant confusion about the generative aspect of the generative models, to which the authors say (R2, second para of response) that sampling from a generative model was beyond the scope of the paper, and what they focus on compression. But then, when asked to compare with compression mechanisms, they emphasize (R4, second para of response) that there is value in having a generative model vs just a compression algorithm. Surely if generative models are important they should show generative behaviour (e.g. sampling) or if compression is important than comparing with compression algorithms (even classical ones based on wavelets, etc) is warranted. Furthermore, the application to the medical domains is questioned (appropriately, I think), with the response being that they are important but beyond the scope of the paper. Individually, some of these answers are sensible, but together they highlight why the paper falls short of either a strong methodological paper or a strong application paper. There are good ideas in this paper, and I really encourage the authors to continue. My main suggestion coming our from the review process is that first the story and positioning needs to be straightened out, and based on the that appropriate changes be made. For example, if methodology is emphasized, then more novelty seems to be necessary, and either sampling should be addressed as part of the generative model, or the generative claim/emphasis should be dropped (but the experimental tasks are appropriate). If the application is emphasized, then the method should be applied to more than the current applications, but maybe one that the authors stated as the downstream goal, such as anomaly detection. I hope the authors take the productive comments here and improve the paper, it would be great to have it published in a future conference.""" 82,"""Weakly Supervised Lesion Localization With Probabilistic-CAM Pooling""","['Chest X-rays', 'Lesion localization', 'Weakly supervised learning']","""Localizing thoracic diseases on chest X-ray plays a critical role in clinical practices such as diagnosis and treatment planning. However, current deep learning based approaches often require strong supervision, e.g. annotated bounding boxes, for training such systems, which is infeasible to harvest in large-scale. We present Probabilistic Class Activation Map (PCAM) pooling, a novel global pooling operation for lesion localization with only image-level supervision. PCAM pooling explicitly leverages the excellent localization ability of CAM (Zhou et al., 2016) during training in a probabilistic fashion. Experiments on the ChestX-ray14 (Wang et al., 2017) dataset show our method outperforms state-of-the-art baseline on the localization task. Visual examination on the probability maps generated by PCAM pooling shows clear and sharp boundaries around lesion regions compared to the localization heatmaps generated by CAM. ""","""All four reviewers recommend 'weak reject', citing weakness in the methodological novelty and experimental sufficiency. The AC concurs with the reviewing opinons.""" 83,"""On the limits of cross-domain generalization in automated X-ray prediction""","['Chest X-ray', 'Radiology', 'Deep Learning', 'Generalization']","""This large scale study focuses on quantifying what X-rays diagnostic prediction tasks generalize well across multiple different datasets. We present evidence that the issue of generalization is not due to a shift in the images but instead a shift in the labels. We study the cross-domain performance, agreement between models, and model representations. We find interesting discrepancies between performance and agreement where models which both achieve good performance disagree in their predictions as well as models which agree yet achieve poor performance. We also test for concept similarity by regularizing a network to group tasks across multiple datasets together and observe variation across the tasks.""","""This paper investigates generalization of automatic diagnosis across multiple different datasets. The analysis performed indicates that the main challenge in generalization is not shift in the image domain, but rather shifts in the label domain. The analysis is thorough and interesting, and the authors were very active in the rebuttal phase, providing additional clarifications about their paper.""" 84,"""Training Models 20X Faster in Medical Image Analysis""","['Medical image analysis', 'deep learning', 'segmentation.']","""Analyzing high-dimensional medical images (2D/3D/4D CT, MRI, histopathological images, etc.) plays an important role in many biomedical applications, such as anatomical pattern understanding, disease diagnosis, and treatment planning. The AI assisted models have been widely adopted in the domain of medical image analysis with great successes. However, training such models with large-size data is expensive in terms of computation and memory consumption. In this work, we provide solutions for improving model training efficiency, which will speed up the training of AI models (20X faster on an exemplary 3D segmentation framework), and enable researchers and radiologists to improve the efficiency in their clinical studies. The overall efficiency improvement comes from both improved algorithms and engineering advance.""","""All the reviewers agreed that the topic of this paper is interesting however they pointed out a lot of limitations for this paper such as the discussion about the overhead and preparation time of the proposed setup and comparison with other methods similar to the ones suggested by Reviewer 1. The authors did not submit a rebuttal to address the raised concerns, so I agree with the reviewers that the current draft has a lot of things that are unclear that does not make it ready for publication.""" 85,"""Breaking Speed Limits with Simultaneous Ultra-Fast MRI Reconstruction and Tissue Segmentation""","['fast MRI', 'task-based MRI reconstruction', 'multitask deep learning', '3D regression', '3D semantic segmentation', 'knee cartilage segmentation']","""Magnetic Resonance Image (MRI) acquisition, reconstruction and tissue segmentation are usually considered separate problems. This can be limiting when it comes to rapidly extracting relevant clinical parameters. In many applications, availability of reconstructed images with high fidelity may not be a priority as long as biomarker extraction is reliable and feasible. Built upon this concept, we demonstrate that it is possible to perform tissue segmentation directly from highly undersampled k-space and obtain quality results comparable to those in fully-sampled scenarios. We propose 'TB-recon', a 3D task-based reconstruction framework. TB-recon simultaneously reconstructs MRIs from raw data and segments tissues of interest. To do so, we devised a network architecture with a shared encoding path and two task-related decoders where features flow among tasks. We deployed TB-recon on a set of (up to pseudo-formula ) retrospectively undersampled MRIs from the Osteoarthritis Initiative dataset, where we automatically segmented knee cartilage and menisci. An experimental study was conducted showing the superior performance of the proposed method over a combination of a standard MRI reconstruction and segmentation method, as well as alternative deep learning based solutions. In addition, our ablation study highlighted the importance of skip connections among the decoders for the segmentation task. Ultimately, we conducted a reader study, where two musculoskeletal radiologists assessed the proposed models reconstruction performance.""","""This paper proposes a multi-task network to further break the speed limits. The idea is fine. However, the main concern I have is regarding the speed limits?Can you provide us what's the current speed limits of exisiting methods and what's your breakthrough with the proposed method? I suggest acceptance but would like to get the feedback from the authors. """ 86,"""Exploring Bayesian Deep Learning Uncertainty Measures for Segmentation of New Lesions in Longitudinal MRIs""","['Multiple Sclerosis', 'New and enlarging lesions', 'longitudinal MRI', 'Bayesian Deep Learning.']","""In this paper, we develop a modified U-Net architecture to accurately segment new and enlarging lesions in longitudinal MRI, based on multi-modal MRI inputs, as well as subtrac- tion images between timepoints, in the context of large-scale clinical trial data for patients with Multiple Sclerosis (MS). We explore whether MC-Dropout measures of uncertainty lead to confident assertions when the network output is correct, and are uncertain when incorrect, thereby permitting their integration into clinical workflows and downstream in- ference tasks.""","""While the reviewers agree that the paper is well written and that the application is relevant, they also share concerns on the novelty and presentation of this work.""" 87,"""Context Aware Convolutional Neural Networks for Segmentation of Aortic Dissection""","['aortic dissection', 'segmentation', 'convolutional neural networks', 'deep learning', '3D reconstruction']","""Three-dimensional (3D) reconstruction of patient-specific arteries is necessary for a variety of medical and engineering fields, such as surgical planning and physiological modeling. These geometries are created by segmenting and stacking hundreds (or thousands) of two-dimensional (2D) slices from a patient scan to form a composite 3D structure. However, this process is typically laborious and can take hours to fully segment each scan. Convolutional neural networks (CNNs) offer an attractive alternative to reduce the burden of manual segmentation, allowing researchers to reconstruct 3D geometries in a fraction of the time. We focused this work specifically on Stanford type B aortic dissection (TBAD), characterized by a tear in the descending aortic wall that creates two channels of blood flow: a normal channel called a true lumen and a pathologic new channel within the wall called a false lumen. While significant work has been dedicated to automated aortic segmentations, TBAD segmentations present unique challenges due to their irregular shapes, the need to distinguish between the two lumens, and patient to patient variability in the false lumen contrast. Here, we introduced a variation on the U-net architecture, where small stacks of slices are inputted into the network instead of individual 2D slices. This allowed the network to take advantage of contextual information present within neighboring slices. We compared and evaluated this variation with a variety of standard CNN segmentation architectures and found that our stacked input structure significantly improved segmentation accuracy for both the true and false lumen by more than ~12%. The resulting segmentations allowed for more accurate 3D reconstructions which closely matched our manual results.""","""The reviewers agree that the paper is well written and has a number of strengths, but they also agree that it has little methodological novelty and also point to several methodological flaws. The authors have provided extensive replies to the criticism, which I really appreciate. In these replies some results and modifications are promised, while it would be stronger if concrete text was proposed. Overall, my evaluation is that the modifications unfortunately do not seem to add major novelty or significantly change the manuscript. """ 88,"""Locating Cephalometric X-Ray Landmarks with Foveated Pyramid Attention""","['Deep learning', 'Landmark detection', 'Attention mechanism', 'Convolutional Neural Network', '2D X-ray cephalometric analysis', 'Image pyramid']","""CNNs, initially inspired by human vision, differ in a key way: they sample uniformly, rather than with highest density in a focal point. For very large images, this makes training untenable, as the memory and computation required for activation maps scales quadratically with the side length of an image. We propose an image pyramid based approach that extracts narrow glimpses of the of the input image and iteratively refines them to accomplish regression tasks. To assist with high-accuracy regression, we introduce a novel intermediate representation we call spatialized features. Our approach scales logarithmically with the side length, so it works with very large images. We apply our method to Cephalometric X-ray Landmark Detection and get state-of-the-art results.""","""Reviewers agree that this work presents an interesting methodological contribution on landmark localization, with three reviewers voting for acceptance (one was late with his review). Comments by the reviewers were extensively addressed and misconceptions clarified, thus leading to an improved manuscript already and making the contribution valuable for presentation at MIDL 2020.""" 89,"""Image Translation by Latent Union of Subspaces for Cross-Domain Plaque Segmentation""","['domain transfer', 'image translation', 'plaque', 'aortic calcification', 'deep learning', 'detection', 'CT']","""Calcified plaque in the aorta and pelvic arteries is associated with coronary artery calcification and is a strong predictor of heart attack. Current calcified plaque detection models show poor generalizeability to different domains (ie. pre-contrast vs. post-contrast CT scans). Many recent works have shown how cross domain object detection can be improved using an image translation model which translates between domains using a single shared latent space. However, while current image translation models do a good job preserving global/intermediate level structures they often have trouble preserving tiny structures. In medical imaging applications, preserving small structures is important since these structures can carry information which is highly relevant for disease diagnosis. Recent works on image reconstruction show that complex real-world images are better reconstructed using a union of subspaces approach. Since small image patches are used to train the image translation model, it makes sense to enforce that each patch be represented by a linear combination of subspaces which may correspond to the different parts of the body present in that patch. Motivated by this, we propose an image translation network using a shared union of subspaces constraint and show our approach preserves subtle structures (plaques) better than the conventional method. We further applied our method to a cross domain plaque detection task and show significant improvement compared to the state-of-the art method.""","""I agree with the reviewers that the paper is quite densely written, it is difficult to understand the details and what exactly the contribution of this work is, but I like that the paper presents image synthesis, which is evaluated with a clinically relevant application. This is an interesting approach for detection of arterial calcifications.""" 90,"""Pathology GAN: Learning deep representations of cancer tissue""","['Generative Adversarial Networks', 'Digital Pathology']","""We apply Generative Adversarial Networks (GANs) to the domain of digital pathology. Current machine learning research for digital pathology focuses on diagnosis, but we suggest a different approach and advocate that generative models could drive forward the understanding of morphological characteristics of cancer tissue. In this paper, we develop a framework which allows GANs to capture key tissue features and uses these characteristics to give structure to its latent space. To this end, we trained our model on 249K H&E breast cancer tissue images. We show that our model generates high quality images, with a Frechet Inception Distance (FID) of 16.65. We additionally assess the quality of the images with cancer tissue characteristics (e.g. count of cancer, lymphocytes, or stromal cells), using quantitative information to calculate the FID and showing consistent performance of 9.86. Additionally, the latent space of our model shows an interpretable structure and allows semantic vector operations that translate into tissue feature transformations. Furthermore, ratings from two expert pathologists found no significant difference between our generated tissue images from real ones.""","""Reviewers are in general positive about this paper. While there is some unclarity regarding application of this method, there is agreement that the analysis is thorough, and - adding to reviewers from my point of view- findings like the visualization of the latent space through interpolation and vector arithmetic is a very interesting feature. Although the methodological contribution is limited, the work seems to be carefully designed to prevent hallucination and provide a meaningful low dimensional representation of pathology relevant image features, thus making it worthwhile to be discussed at MIDL.""" 91,"""A Deep Learning based Fast Signed Distance Map Generation""","['Signed Distance Map', 'Deep Learning']","""Signed distance map (SDM) is a common representation of surfaces in medical image analysis and machine learning. The computational complexity of SDM for 3D parametric shapes is often a bottleneck in many applications, thus limiting their interest. In this paper, we propose a learning-based SDM generation neural network which is demonstrated on a tridimensional cochlea shape model parameterized by 4 shape parameters. The proposed SDM Neural Network generates a cochlea signed distance map depending on four input parameters and we show that the deep learning approach leads to a 60 fold improvement in the time of computation compared to more classical SDM generation methods. Therefore, the proposed approach achieves a good trade-off between accuracy and efficiency. ""","""Multiple reviewers found this a useful contribution on learning distance maps and the differences with prior work seem sufficient. Please in the final version explain the differences with respect to the references mentioned by AnonReviewer4.""" 92,"""End-to-end learning of convolutional neural net and dynamic programming for left ventricle segmentation""","['Differentiable programming', 'End-to-end learning', 'left ventricle segmentation']","""Differentiable programming is able to combine different functions or modules in a data processing pipeline with the goal of applying gradient descent-based end-to-end learning or optimization. A significant impediment to differentiable programming is the non-differentiable nature of some functions. We propose to overcome this difficulty by using neural networks to approximate such modules. An approximating neural network provides synthetic gradients (SG) for backpropagation across a non-differentiable module. Our design is grounded on a well-known theory that gradient of an approximating neural network can approximate a sub-gradient of a weakly differentiable function. We apply SG to combine convolutional neural network (CNN) with dynamic programming (DP) in end-to-end learning for segmenting left ventricle from short axis view of heart MRI. Our experiments show that end-to-end combination of CNN and DP requires fewer labeled images to achieve a significantly better segmentation accuracy than using only CNN.""","""This paper had interesting exchanges between the authors and the reviewers. Although the paper has its own limitations (for example, it does not outperform UNet when trained on the full datasets) it is not void of interest. I also found the rebuttal convincing. I thus slightly recommend this paper. """ 93,"""Learning to map between ferns with differentiable binary embedding networks""","['End-To-End Trainable Ferns', 'Network Efficiency', 'Binary Embedding']","""Current deep learning methods are based on the repeated, expensive application of convolutions with parameter-intensive weight matrices. In this work, we present a novel concept that enables the application of differentiable random ferns in end-to-end networks. It can then be used as multiplication-free convolutional layer alternative in deep network architectures. Our experiments on the binary classification task of the TUPAC'16 challenge demonstrate improved results over the state-of-the-art binary XNOR net and only slightly worse performance than its 2x more parameter intensive floating point CNN counterpart. ""","""The majority of reviewers acknowledge that the idea of using random ferns as a replacement for convolutions is worthwhile to investigate. There are some concerns regarding the presentation and the lack of clarity where gains in energy consumption come from. Overall, it seems the methodological idea is interesting and suitable to be discussed at MIDL 2020.""" 94,"""Spherical function regularization for parallel MRI reconstruction""","['Parallel MRI', 'spherical function', 'regularization', 'coil sensitivity', 'ADMM']","""From the optimization point of view, a difficulty with parallel MRI with simultaneous coil sensitivity estimation is the multiplicative nature of the non-linear forward operator: the image being reconstructed and the coil sensitivities compete against each other, causing the optimization process to be very sensitive to small perturbations. This can, to some extent, be avoided by regularizing the unknown in a suitably ``orthogonal'' fashion. In this paper, we introduce such a regularization based on spherical function bases. To perform this regularization, we represent efficient recurrence formulas for spherical Bessel functions and associated Legendre functions. Numerically, we study the solution of the model with non-linear ADMM. We perform various numerical simulations to demonstrate the efficacy of the proposed model in parallel MRI reconstruction.""","""The paper is out of scope of the conference since it is not correlated with deep learning. Furthermore, there have been ADMM model based deep learning methods. """ 95,"""A learning strategy for contrast-agnostic MRI segmentation""","['Segmentation', 'contrast independence', 'U-net', 'brain', 'MRI']","""We present a deep learning strategy for contrast-agnostic semantic segmentation of unpreprocessed brain MRI scans, without requiring additional training or fine-tuning for new modalities. Classical Bayesian methods address this segmentation problem with unsupervised intensity models, but require significant computational resources. In contrast, learning-based methods can be fast at test time, but are sensitive to the data available at training. Our proposed learning method, SynthSeg, leverages a set of training segmentations (no intensity images required) to generate synthetic scans of widely varying contrasts on the fly during training. These scans are produced using the generative model of the classical Bayesian segmentation framework, with randomly sampled parameters for appearance, deformation, noise, and bias field. Because each mini-batch has a different synthetic contrast, the final network is not biased towards any specific MRI contrast. We comprehensively evaluate our approach on four datasets comprising over 1,000 subjects and four MR contrasts. The results show that our approach successfully segments every contrast in the data, performing slightly better than classical Bayesian segmentation, and three orders of magnitude faster. Moreover, even within the same type of MRI contrast, our strategy generalizes significantly better across datasets, compared to training using real images. Finally, we find that synthesizing a broad range of contrasts, even if unrealistic, increases the generalization of the neural network. Our code and model are open source at pseudo-url.""","""A well-defined method for dealing with limited amount of ground truth (fully labeled) segmentation data which can be seen as a form of data augmentation. Validation on relatively close modalities weakens validity of the general claim that this method is entirely ""contrast agnostic"". For such a claim significantly larger variation of modalities should be studied (as admitted by the authors). It is not obvious if entirely ""contrast agnostic"" (modality independent) features exist as this implies that NN should learn the structure (anatomy), which current NN are not know to be good at. While the paper shows some promise for limited data variability, it is not obvious how this approach would scale. Perhaps, the authors can use their own generative framework to synthesize unseen modalities to validate their claim. From a practical point of view, the authors are strongly encouraged to include a more detailed analysis of T1 (as pointed out by R3). The authors are strongly encouraged to weaken their ""the first time"" claim. """ 96,"""Which MOoD Methods work? A Benchmark of Medical Out of Distribution Detection""","['Medical imaging', 'out-of-distribution detection', 'chest X-ray', 'fundus', 'histology']","""There is a rise in the use of deep learning for automated medical diagnosis, most notably in medical imaging. Such an automated system uses a set of images from a patient to diagnose whether they have a disease. However, systems trained for one particular domain of images cannot be expected to perform accurately on images of a different domain. These images should be filtered out by an Out-of-Distribution Detection (OoDD) method prior to diagnosis. This paper benchmarks popular OoDD methods in three domains of medical imaging: chest x-rays, fundus images and histology slides. Despite methods yielding good results on all three domains, they fail to recognize images close to the training distribution.""","""The initial opinions on the paper were split, with two reviewers suggesting 'Weak accept', one 'Strong reject' and a 'Weak reject'. I did read the paper myself and I share the concerns of reviewers 4 and 1 that it is not very well written and methods are not explained, or very late in the paper. For example the method Mahalanobis is mentioned in the figures, but not in the Methods section, among others. I also find, in addition to some of the reviewers the quality of the rebuttal lacking. For example, reviewer 1 mentions the term AEBCE which is used but never explained, in addition to other methods were the authors just respond that it is in the methods section, where it is not. As such, I lean towards rejection.""" 97,"""Single-Stage vs. Multi-Stage Machine Learning Algorithms for Prostate Segmentation in Magnetic Resonance Images""","['Machine Learning', 'Prostate Segmentation', 'Magnetic Resonance Imaging']","""Fusion of magnetic resonance images (MRI) with ultrasound has led to major improvements in precision diagnostics for prostate cancer. A key step in the fusion process is segmentation of the prostate in MRI and machine learning (ML) has proven to be a valuable tool for segmentation. In this paper, we compare two ML workflows for prostate segmentation; a single-stage and multi-stage ML algorithm to address the challenges of prostate segmentation.""","""The reviewers highlighted a lack of novelty and issue on the validation of the performance. Given the doubt on the proper split of the data for training and testing, I recomend this manuscript be rejected.""" 98,"""A Deep Learning Approach for Motion Forecasting Using 4D OCT Data ""","['4D Deep Learning', 'Optical Coherence Tomography', 'Motion Estimation', 'Motion Forecasting']","""Forecasting motion of a specific target object is a common problem for surgical interventions, e.g. for localization of a target region, guidance for surgical interventions, or motion compensation. Optical coherence tomography (OCT) is an imaging modality with a high spatial and temporal resolution. Recently, deep learning methods have shown promising performance for OCT-based motion estimation based on two volumetric images. We extend this approach and investigate whether using a time series of volumes enables motion forecasting. We propose 4D spatio-temporal deep learning for end-to-end motion forecasting and estimation using a stream of OCT volumes. We design and evaluate five different 3D and 4D deep learning methods using a tissue data set. Our best performing 4D method achieves motion forecasting with an overall average correlation coefficient of 97.41%, while also improving motion estimation performance by a factor of 2.5 compared to a previous 3D approach. ""","""All reviewers unanimously suggest acceptance of this paper that proposes novel methodological elements (in the context 4D deep learning) and also provides a good comparative evaluation. The small remaining concerns can easily be fixed, I therefore strongly recommend to accept the paper.""" 99,"""Assessing the validity of saliency maps for abnormality localization in medical imaging""","['Saliency maps', 'localization', 'anomaly detection', 'medical imaging', 'deep learning.']","""Saliency maps have become a widely used method to assess which areas of the input image are most pertinent to the prediction of a trained neural network. However, in the context of medical imaging, there is no study to our knowledge that has examined the efficacy of these techniques and quantified them using overlap with ground truth bounding boxes. In this work, we explored the credibility of the various existing saliency map methods on the RSNA Pneumonia dataset. We found that GradCAM was the most sensitive to model parameter and label randomization, and was highly agnostic to model architecture.""","""The interpretability of deep learning models is an important area of research. This work evaluates the usefulness of several methods that aim to visualize the decision making of a neural network. The reviewers are in agreement that the results presented here are of enough interest to warrant acceptance.""" 100,"""Segmentation of the Myocardium on Late-Gadolinium Enhanced MRI based on 2.5 D Residual Squeeze and Excitation Deep Learning Model ""","['LGE-MRI', 'Deep learning', 'Segmentation of LGE-MRI', '2.5 D deep learning modeling']","""Cardiac left ventricular (LV) segmentation from short-axis MRI acquired 10 minutes after the injection of a contrast agent (LGE-MRI) is a necessary step in the processing allowing the identification and diagnosis of cardiac diseases such as myocardial infarction. However, this segmentation is challenging due to high variability across subjects and the potential lack of contrast between structures. Then, the main objective of this work is to develop an accurate automatic segmentation method based on deep learning models for the myocardial borders on LGE-MRI. To this end, 2.5 D residual neural network integrated with a squeeze and excitation blocks in encoder side with specialized convolutional has been proposed. Late fusion has been used to merge the output of the best trained proposed models from a different set of hyperparameters. A total number of 320 exams (with a mean number of 6 slices per exam) were used for training and 28 exams used for testing. The performance analysis of the proposed ensemble model in the basal and middle slices was similar as compared to intra-observer study and slightly lower at apical slices. The overall Dice score was 82.01% by our proposed method as compared to Dice score of 83.22% obtained from the intra observer study. The proposed model could be used for the automatic segmentation of myocardial border that is a very important step for accurate quantification of no-reflow, myocardial infarction, myocarditis, and hypertrophic cardiomyopathy, among others.""","""The reviewers have provided detailed comments and listed many valid issues like modest description of previous work, lack of clarity and details regarding the method, and missing details about the reference standard, modest comparison with previous work. However, the addressed problem is clinically relevant, large data set was used which I consider a strength, and for a short paper relatively detailed analysis of the results is presented. If I compare well, the results achieved here are comparable with those reported on MS-CMRseg challenge. I think this would be a nice contribution to the conference.""" 101,"""A hierarchical fusion framework integrating random projection-based classifiers: application in head and neck squamous carcinoma cancer""","['ensemble method', 'random projection', 'fusion architecture', 'ensemble diversity']","""Ensemble methods achieves better performance than single classifier model. Classifier diversity and fusion architecture are equally important for building a successful multi-classifier system. In this study, we introduced random projection to obtain the required classifier diversity and then proposed a hierarchical framework, namely a novel hierarchical fusion integrating random projection diversified classifiers (HFRPC). The proposed hierarchical fusion scheme was validated on survival prediction of head and neck squamous carcinoma cancer (HNSCC). Experimental results have demonstrated the superiority of the proposed HFRPC framework over the base classifier member and the state-of-the-art benchmark ensemble methods, rendering it a potential tool to assist medical decision making in the practical clinical setting.""","""The paper presents an ensemble method for carcinoma cancer classification. Unfortunately, the description of the method is too short and vague to assess its theoretical validity, novelty, and relation to existing methods. """ 102,"""Random smooth gray value transformations for cross modality learning with gray value invariant networks""",[],"""Random transformations are commonly used for augmentation of the training data with the goal of reducing the uniformity of the training samples. These transformations normally aim at variations that can be expected in images from the same modality. Here, we propose a simple method for transforming the gray values of an image with the goal of reducing cross modality differences. This approach enables segmentation of the lumbar vertebral bodies in CT images using a network trained exclusively with MR images.""","""This paper proposes an intensity transformation method from MR to CT. The reviewers have major concerns on method evaluation, which only involves a few number of CT images and has no comparison with alternative methods; method setting, e.g., number of labels, modality transfer directions (MR to CT, but not CT to MR), network and training setting; and ability to generalize to other regions or modalities.""" 103,"""Fusing Structural and Functional MRIs using Graph Convolutional Networks for Autism Classification""","['Graph Convolutional Network', 'Neuroimaging', 'Autism Classification']","""Geometric deep learning methods such as graph convolutional networks have recently proven to deliver generalized solutions in disease prediction using medical imaging. In this paper, we focus particularly on their use in autism classification. Most of the recent methods use graphs to leverage phenotypic information about subjects (patients or healthy controls) as additional contextual information. To do so, metadata such as age, gender and acquisition sites are utilized to define intricate relations (edges) between the subjects. We alleviate the use of such non-imaging metadata and propose a fully imaging-based approach where information from structural and functional Magnetic Resonance Imaging (MRI) data are fused to construct the edges and nodes of the graph. To characterize each subject, we employ brain summaries. These are 3D images obtained from the 4D spatiotemporal resting-state fMRI data through summarization of the temporal activity of each voxel using neuroscientifically informed temporal measures such as amplitude low frequency fluctuations and entropy. Further, to extract features from these 3D brain summaries, we propose a 3D CNN model. We perform analysis on the open dataset for autism research (full ABIDE I-II) and show that by using simple brain summary measures and incorporating sMRI information, there is a noticeable increase in the generalizability and performance values of the framework as compared to state-of-the-art graph-based models.""","""3 out of 4 reviewers recommended weak acceptance of this work, while 1 reviewer recommended strong acceptance. After reading their comments and the corresponding answers given by the authors in their rebuttal, I think this work should be accepted for publication at MIDL. Please, when submitting the Camera Ready version, take into account the suggestions made by the reviewers which will improve the quality of your final submission.""" 104,"""Automated ultrasound assessment of amniotic fluid volume using deep learning""","['Deep learning', 'segmentation', 'amniotic fluid index', 'ultrasound image']","""The estimation of antenatal amniotic fluid (AF) volume (AFV) is important as it offers crucial information about fetal development, fetal well-being and perinatal prognosis. However, AFV measurement is cumbersome and patient specific; moreover, it is also sonographer dependent, with the accuracy of measurement varying greatly with experience. Therefore, the development of accurate, robust and adoptable methods to evaluate AFV is highly desirable. In this regard, automation is expected to reduce user-dependent variability and reduce the workload of sonographers. However, automating AFV measurement is very challenging, because accurate detection of AF pockets is difficult owing to various confusing factors, such as reverberation artifact, AF mimicking region and floating matter. Furthermore, they exhibit an unspecified variety of shapes and sizes, and ultrasound images often show missing or incomplete structural boundaries. Our proposed hierarchical deep-learning-based method comprises two steps, which considers clinicians' anatomical-knowledge-based approaches to overcome the abovementioned difficulties. The first step is the segmentation of the AF pocket using our proposed deep learning network, AF-net, to identify it. AF-net is a variation of U-net combined with three complementary concepts - atrous convolution, multi-scale side-input layer, and side-output layer. In the second step, the amniotic fluid index (AFI) is measured using the segmentation result from the first step. The experimental results demonstrate that the proposed method provides a measurement of AFI that is as robust and precise as the result of clinicians. The proposed method achieved the Dice similarity of 0.051 for AF segmentation and achieved an mean absolute error of 2.1000mm and a mean relative error of 0.0147$ for AFI value.""","""The reviewers point to a lack of novelty in the paper and to issues relating to the motivation and experiemental results. Some application-related merit was nonethless highlighted consistently in the reviews.""" 105,"""Continual Learning for Domain Adaptation in Chest X-ray Classification""","['Convolutional Neural Networks', 'Continual Learning', 'Catastrophic Forgetting', 'Chest X-Ray', 'ChestX-ray14', 'MIMIC-CXR', 'Joint Training', 'Elastic Weight Consolidation', 'Learning Without Forgetting']","""Over the last years, Deep Learning has been successfully applied to a broad range of medical applications. Especially in the context of chest X-ray classification, results have been reported which are on par, or even superior to experienced radiologists. Despite this success in controlled experimental environments, it has been noted that the ability of Deep Learning models to generalize to data from a new domain (with potentially different tasks) is often limited. In order to address this challenge, we investigate techniques from the field of Continual Learning (CL) including Joint Training (JT), Elastic Weight Consolidation (EWC) and Learning Without Forgetting (LWF). Using the ChestX-ray14 and the MIMIC-CXR datasets, we demonstrate empirically that these methods provide promising options to improve the performance of Deep Learning models on a target domain and to mitigate effectively catastrophic forgetting for the source domain. To this end, the best overall performance was obtained using JT, while for LWF competitive results could be achieved - even without accessing data from the source domain.""","""The paper has obtained mixed reviews3 weak accept (R1-R3) and 1 weak reject (R4). All reviewers appreciate the novel idea of applying continual learning to address a domain adaptation problem. The most negative R4 points out 'Lack of validation on a larger number of tasks/datasets', which the AC thinks that author makes a good rebuttal. Therefore, the ACs decide to downplay R4's comments and recommend to weak-accept the paper.""" 106,"""Interactive Tool for Nuclei Segmentation and Classification""","['Histopathology', 'Nucleus segmentation', 'Nucleus classi\x0ccation.']","""Object segmentation and classification in medical imaging are essential tasks for the diagnosis and understanding of diseases. Manual classification of the anatomical structures and the annotation of their boundaries are laborious tasks that also require strong medical expertise. Deep neural networks can be used to automate this task, but because of the unavailability of large datasets with multiple structures annotated and labeled, their performance is not at par with manual annotations. We propose a semi-automated interactive tool based on deep learning to produce high-quality annotations quickly. The architecture uses two convolutional networks: the first network produces multiple segmentations using a few clicks inside and outside the object, while the second classifies the object and selects one segmentation. We use MonuSAC histopathology data with four classes of labeled and annotated nuclei annotated as a testbed. On held-out images, our method was significantly more accurate in both segmentation and classification as compared to fully automated methods, while it was also at least 3 times faster as compared to manual annotation methods.""","""Most reviewers agree on the rejection of the paper based on the lacking description of the method. One reviewer rates the paper a 3 - weak accept, but also highlights the lack of important details and limited compared to the state-of-the-art. This is especially important as the paper is categorized as a well-validated application paper. This cannot easily be fixed in a revision and would require substantial changes, which are not feasible for a short paper. As such, I recommend rejection.""" 107,"""Improving Mammography Malignancy Segmentation by Designing the Training Process""","['Mammography', 'Segmentation', 'Malignancy Detection', 'Explainability']","""We work on the breast imaging malignancy segmentation task while focusing on the train- ing process instead of network complexity. We designed a training process based on a modified U-Net, increasing the overall segmentation performances by using both, benign and malignant data for training. Our approach makes use of only a small amount of anno- tated data and relies on transfer learning from a self-supervised reconstruction task, and favors explainability.""","""The paper tries to incorporate the unlabelled data with self-supervised learning into the training process. The major issue is lacking of the details. But since it is a short paper, so I suggest it for weak acceptance due to its novelty in the training process. The citations need to be properly updated to use the journal publication ones instead of archives. Please carefully go through the citations. """ 108,"""3D-RADNet: Extracting labels from DICOM metadata for training general medical domain deep 3D convolution neural networks""","['Transfer learning', 'Large dataset', 'data mining']","""Training deep convolution neural network requires a large amount of data to obtain good performance and generalisable results. Transfer learning approaches from datasets such as ImageNet had become important in increasing accuracy and lowering training samples required. However, as of now, there has not been a popular dataset for training 3D volumetric medical images. This is mainly due to the time and expert knowledge required to accurately annotate medical images. In this study, we present a method in extracting labels from DICOM metadata that information on the appearance of the scans to train a medical domain 3D convolution neural network. The labels include imaging modalities and sequences, patient orientation and view, presence of contrast agent, scan target and coverage, and slice spacing. We applied our method and extracted labels from a large amount of cancer imaging dataset from TCIA to train a medical domain 3D deep convolution neural network. We evaluated the effectiveness of using our proposed network in transfer learning a liver segmentation task and found that our network achieved superior segmentation performance (DICE=90.0%) compared to training from scratch (DICE=41.8%). Our proposed network shows promising results to be used as a backbone network for transfer learning to another task. Our approach along with the utilising our network, can potentially be used to extract features from large-scale unlabelled DICOM datasets.""","""Reviewers side on the 'accept' side. There is discussion and the authors are encouraged to update / clarify their paper in accordance. I recommend to accept.""" 109,"""Anatomical Predictions using Subject-Specific Medical Data""","['medical imaging', 'computer vision', 'prediction', 'registration', 'clinical', 'neural networks']","""Changes in brain anatomy can provide important insight for treatment design or scientific analyses. We present a method that predicts how brain anatomy for an individual will change over time. We model these changes through a diffeomorphic deformation field, and design a predictive function using convolutional neural networks. Given a predicted deformation field, a baseline scan can be warped to give a prediction of the brain scan at a future time. We demonstrate the method using the ADNI cohort, and analyze how performance is affected by model variants and the type of subject-specific information provided. We show that the model provides good predictions and that external clinical data can improve predictions. ""","""All reviewers have given a positive evaluation to the paper (2 strong accept, 2 weak accept). Therefore, I strongly recommend its acceptance.""" 110,"""Bayesian Generative Models for Knowledge Transfer in MRI Semantic Segmentation Problems""","['Brain Tumor Segmentation', 'Brain lesion segmentation', 'Transfer Learning', 'Variational Inference', 'Bayesian Neural Networks', 'Variational Autoencoder', '3D CNN']","""Automatic segmentation methods based on deep learning have recently demonstrated state-of-the-art performance, outperforming the ordinary methods. Nevertheless, these methods are inapplicable for small datasets, which are very common in medical problems. To this end, we propose a knowledge transfer method between diseases via the Generative Bayesian Prior network. Our approach is compared to a pre-train approach and random initialization and obtains the best results in terms of Dice Similarity Coefficient metric for the small subsets of the Brain Tumor Segmentation 2018 database (BRATS2018).""","""All the reviewers recommended acceptance of this work. I agree with them in that it is an interesting work and should be accepted as a short paper in MIDL 2020. The reviewers have raised a few points that would be interesting discussing in the final camera ready version. Please, when submitting the final manuscript, try to to address these points.""" 111,"""Joint Liver Lesion Segmentation and Classification via Transfer Learning""","['joint learning', 'liver lesions', 'lesion classification', 'lesion segmentation', 'CT']","""Transfer learning and joint learning approaches are extensively used to improve the performance of Convolutional Neural Networks (CNNs). In medical imaging applications in which the target dataset is typically very small, transfer learning improves feature learning while joint learning has shown effectiveness in improving the network's generalization and robustness. In this work, we study the combination of these two approaches for the problem of liver lesion segmentation and classification. For this purpose, 332 abdominal CT slices containing lesion segmentation and classification of three lesion types are evaluated. For feature learning, the dataset of MICCAI 2017 Liver Tumor Segmentation (LiTS) Challenge is used. Joint learning shows improvement in both segmentation and classification results. We show that a simple joint framework outperforms the commonly used multi-task architecture (Y-Net), achieving an improvement of 10% in classification accuracy, compared to 3% improvement with Y-Net.""","""The following quotes from the reviews demonstrate important critical points sufficient to justify reject. no rebuttal was provided to address any of them: - ""transfer, joint and multi-task learning are well known approaches to deal with limited data"",... ""the techniques are not new"", ""application of previous techniques"", ""no novel approach"" - ""motivation is quite weak"" - ""no comparison to existing approaches on liver lesion segmentation"" In summary, rejection is justified by lack of technical novelty, weak motivation, and lack of comparison. Most reviewers also pointed out some issues related to clarity and lack of details and focus.""" 112,"""Morphological Signature for Improvement of Weakly Supervised Segmentation of Quadriceps Muscles on Magnetic Resonance Imaging Data""","['automatic segmentation', 'machine learning', 'weakly supervised', 'magnetic resonance imaging', 'data augmentation']","""Automatic segmentation allows advancement in medical diagnosis and follow-up but remains a challenging task. Thanks to new machine learning approaches, this task tends to be more and more robust but still required many manual segmentations. Here we proposed to improve segmentation results obtained by multi-atlas segmentation with corrective learning (CL) approach using a selection of atlases based on morphological similarity to the image to process. We first introduce our morphological measurement dedicated for quadriceps segmentation of 3D T1 Water-only MR images and then use it to select closest atlases. Our results show that using few atlases (3 in lieu of 6) based on our morphological measurement improves segmentation quality and decrease computational time for multi-atlas segmentation with CL. Based on the measurements, we also defined a data augmentation strategy to train U-Net (a well-known and efficient deep learning segmentation approach), expecting better generalization capability, with very promising results.""","""While the reviewers agree that there is some value in the ""distance"" heuristic presented in this submission, they also agree that the novelty is limited and that the article is half baked and still needs work. Moreover, the authors failed to address the reviewers' comments in a rebuttal (which they didn't write at all).""" 113,"""Bayesian Learning of Probabilistic Dipole Inversion for Quantitative Susceptibility Mapping""","['Bayesian deep learning', 'variational inference', 'convolutional neural network', 'quantitative susceptibility mapping']","""A learning-based posterior distribution estimation method, Probabilistic Dipole Inversion (PDI), is proposed to solve quantitative susceptibility mapping (QSM) inverse problem in MRI with uncertainty estimation. A deep convolutional neural network (CNN) is used to represent the multivariate Gaussian distribution as the approximated posterior distribution of susceptibility given the input measured field. In PDI, such CNN is firstly trained on healthy subjects' data with labels by maximizing the posterior Gaussian distribution loss function as used in Bayesian deep learning. When testing on each patient' data without any label, PDI updates the pre-trained CNN's weights in an unsupervised fashion by minimizing the KullbackLeibler divergence between the approximated posterior distribution represented by CNN and the true posterior distribution given the likelihood distribution from known physical model and pre-defined prior distribution. Based on our experiments, PDI provides additional uncertainty estimation compared to the conventional MAP approach, meanwhile addressing the potential discrepancy issue of CNN when test data deviates from training dataset.""","""This paper proposes a Bayesian deep learning approach for solving Quantitative Susceptibility Mapping. All reviewers agree that the paper is well written and the ideas and experiments are novel and interesting. The validation is limited but enough in the opinion of the reviewers. """ 114,"""Laplacian pyramid-based complex neural network learning for fast MR imaging""","['Deep learning', 'complex convolution', 'Laplacian pyramid decomposition']","""A Laplacian pyramid-based complex neural network, CLP-Net, is proposed to reconstruct high-quality magnetic resonance images from undersampled k-space data. Specifically, three major contributions have been made: 1) A new framework has been proposed to explore the encouraging multi-scale properties of Laplacian pyramid decomposition; 2) A cascaded multi-scale network architecture with complex convolutions has been designed under the proposed framework; 3) Experimental validations on an open source dataset fastMRI demonstrate the encouraging properties of the proposed method in preserving image edges and fine textures.""","""The authors presented a robust rebuttal addressing the main concerns of the reviewers, providing more details and explanations about the method together with experiments in a public available dataset. Even if I agree with reviewer 3 that GAN-based reconstruction needs to be discussed more in the paper as it is used a lot from the community for reconstruction problems, I think that the methodology of the paper has merit and it can be interesting for the community. However, I also encourage the authors to incorporate all the answers to the reviewers in their final version.""" 115,"""DRMIME: Differentiable Mutual Information and Matrix Exponential for Multi-Resolution Image Registration""","['Image registration', 'mutual information', 'neural networks', 'differentiable programming', 'end-to-end optimization']","""In this work, we present a novel unsupervised image registration algorithm. It is differentiable end-to-end and can be used for both multi-modal and mono-modal registration. This is done using mutual information (MI) as a metric. The novelty here is that rather than using traditional ways of approximating MI which are often histogram based, we use a neural estimator called MINE and supplement it with matrix exponential for transformation matrix computation. The introduction of MINE tackles some of the drawbacks of histogram based MI computation and matrix exponential makes the optimization process smoother. We also introduce the idea of a multi-resolution loss, which makes the optimization process faster and more robust. This leads to improved results as compared to the standard algorithms available out-of-the-box in state-of-the-art image registration toolboxes, both in terms of time as well as registration accuracy, which we empirically demonstrate on publicly available datasets.""","""The paper presents a new image registration method. The method is developed based on mutual information (with MINE), matrix exponential for transformation matrix, and multi-resolution approach. Based on the reviews, replies and the paper, the proposed method is interesting and has been compared with different approaches. The replies from the authors have addressed most of the concerns (although not all) raised by the reviewers. """ 116,"""Towards Multiple Enhancement Styles Generation in Mammography""","['mammogram enhancement', 'deep learning']","""Mammography is a well-established imaging modality for early detection and diagnosis of breast cancer. The raw detector-obtained mammograms are difficult for radiologists to diagnose due to the similarity between normal tissues and potential lesions in the attenuation level and thus mammogram enhancement (ME) is significantly necessary. However, the enhanced mammograms obtained with different mammography devices can be diverse in visualization due to different enhancement algorithms adopted in these mammography devices. Different styles of enhanced mammograms can provide different information of breast tissue and lesion, which might help radiologists to screen breast cancer better. In this paper, we present a deep learning (DL) framework to achieve multiple enhancement styles generation for mammogram enhancement. The presented DL framework is denoted as DL-ME for simplicity. Specifically, the presented DL-ME is implemented with a multi-scale cascaded residual convolutional neural network (MSC-ResNet), in which the output in the coarser scale is used as a part of inputs in the finer scale to achieve optimal ME performance. In addition, a switch map is input into the DL-ME model to control the enhancement style of the outputs. To reveal the multiple enhancement styles generation ability of DL-ME for mammograms, clinical mammographic data from mammography devices of three different manufacturers are used in the work. The results show that the quality of the mammograms generated by our framework can reach the level of clinical diagnosis and enhanced mammograms with different styles can provide more information, which can help radiologists to efficiently screen breast cancers.""","""All reviewers of the paper wrote more or less the same message. The idea is simple, interesting and useful. However, there are many missing details about the training and testing processing. In particular, the machines used in the image acquisition, how the ground truth was obtained, the image resolution used, and if there were pathologies present in the images. Moreover, the assessment based on two experts and a small number of images did not seem to convince the reviewers. Even though short papers do not need extensive validations, it must provide enough preliminary evidence to show that the proposed method has potential. Considering these issues, I agree with the reviewers on their weak reject rating. """ 117,"""Classification of Epithelial Ovarian Carcinoma Whole-Slide Pathology Images Using Deep Transfer Learning""","['Transfer learning', 'Ovarian cancer', 'Digital pathology']","""Ovarian cancer is the most lethal cancer of the female reproductive organs. There are pseudo-formula major histological subtypes of epithelial ovarian cancer, each with distinct morphological, genetic, and clinical features. Currently, these histotypes are determined by a pathologist's microscopic examination of tumor whole-slide images (WSI). This process has been hampered by poor inter-observer agreement (Cohens kappa pseudo-formula - pseudo-formula ). We utilized a two-stage deep transfer learning algorithm based on convolutional neural networks (CNN) and progressive resizing for automatic classification of epithelial ovarian carcinoma WSIs. The proposed algorithm achieved a mean accuracy of pseudo-formula and Cohen's kappa of pseudo-formula in the slide-level classification of pseudo-formula WSIs; performing better than a standard CNN and pathologists without gynecology-specific training. ""","""All reviewers agree that the paper is strong enough for acceptance. They highlight the interesting methodological developments, the well though-out experiments, and the easy-to-follow description. The results look promising as well.""" 118,"""Synthesizing lesions using contextual GANs improves breast cancer classification on mammograms""","['mammography', 'gan', 'data augmentation', 'cancer']","""Data scarcity and class imbalance are two fundamental challenges in many machine learning applications to healthcare. Breast cancer classification in mammography exemplifies these challenges, with a malignancy rate of around 0.5% in a screening population, which is compounded by the relatively small size of lesions (~1% of the image) in malignant cases. Simultaneously, the prevalence of screening mammography creates a potential abundance of non-cancer exams to use for training. Altogether, these characteristics lead to overfitting on cancer cases, while under-utilizing non-cancer data. Here, we present a novel generative adversarial network (GAN) model for data augmentation that can realistically synthesize and remove lesions on mammograms. With self-attention and semi-supervised learning components, the U-net-based architecture can generate high resolution (256x256px) outputs, as necessary for mammography. When augmenting the original training set with the GAN-generated samples, we find a significant improvement in malignancy classification performance on a test set of real mammogram patches. Overall, the empirical results of our algorithm and the relevance to other medical imaging paradigms point to potentially fruitful further applications.""","""The paper has been reviewed by three experts in the field that list a fair number of positive and negative points. On the positive side, they all agree that the paper addresses an important problem and is well written. On the negative side, the paper seems to have very little technical/methodological contribution beyond the combination of well-understood techniques applied to the problem of mammogram generation. Such issue could be compensated by a careful ablation study and thorough comparisons with other similar data augmentation methods recently proposed, but these two points are addressed only partially by the paper. Initial reviews were quite balanced between accept and reject, but after rebuttal, the reviewers seem to be leaning towards rejection. My opinion is that the paper is good, but it needs to address the issues identified by the reviewers before it can be published.""" 119,"""A deep learning-based pipeline for error detection and quality control of brain MRI segmentation results""","['brain MRI', 'segmentation', 'quality control', 'GANs', '3D CNN']","""Brain MRI segmentation results should always undergo a quality control (QC) process, since automatic segmentation tools can be prone to errors. In this work, we propose two deep learning-based architectures for performing QC automatically. First, we used generative adversarial networks for creating error maps that highlight the locations of segmentation errors. Subsequently, a 3D convolutional neural network was implemented to predict segmentation quality. The present pipeline was shown to achieve promising results and, in particular, high sensitivity in both tasks.""","""The question of segmentation QC is very important. However, some aspects of the validation protocol remains questionable (e.g. correct separation between testing/training).""" 120,"""KD-MRI: A knowledge distillation framework for image reconstruction and image restoration in MRI workflow""","['MRI workflow', 'Model compression', 'Knowledge distillation', 'MRI reconstruction', 'MRI super resolution.']","""Deep learning networks are being developed in every stage of the MRI workflow and have provided state-of-the-art results. However, this has come at the cost of increased computation requirement and storage. Hence, replacing the networks with compact models at various stages in the MRI workflow can significantly reduce the required storage space and provide considerable speedup. In computer vision, knowledge distillation is a commonly used method for model compression. In our work, we propose a knowledge distillation (KD) framework for the image to image problems in the MRI workflow in order to develop compact, low-parameter models without a significant drop in performance. We propose a combination of the attention-based feature distillation method and imitation loss and demonstrate its effectiveness on the popular MRI reconstruction architecture, DC-CNN. We conduct extensive experiments using Cardiac, Brain, and Knee MRI datasets for 4x, 5x and 8x accelerations. We observed that the student network trained with the assistance of the teacher using our proposed KD framework provided significant improvement over the student network trained without assistance across all the datasets and acceleration factors. Specifically, for the Knee dataset, the student network achieves pseudo-formula parameter reduction, 2x faster CPU running time, and 1.5x faster GPU running time compared to the teacher. Furthermore, we compare our attention-based feature distillation method with other feature distillation methods. We also conduct an ablative study to understand the significance of attention-based distillation and imitation loss. We also extend our KD framework for MRI super-resolution and show encouraging results. ""","""The paper presents a method for MRI reconstruction of undersampled k-space using a knowledge distillation approach. According to the 3 reviewers, the paper is well written although too long for a MIDL paper. Method novelty is limited as it seems a combination of existing approaches. From an applicability point of view, the reviewers and I are concern about the drop in quality for such a modest speed-up factor. """ 121,"""Multi-view Framework for Histomorphologic Classification""","['Classification', 'Prostate Cancer', 'Digital Pathology']","""Current routine histopathologic evaluation of prostate cancer does not fully account for some individual morphology patterns associated with poor outcome. Pathologists evaluate and score morphology across multiple magnifications, motivating deep learning methods to incorporate various resolutions. We have evaluated a proof-of-concept multi-view framework to classify high risk morphology architectures that does not rely on ensemble-based techniques of multi-magnification models.""","""The majority of reviewers recommended reject based on lack of details and proper comparison. There was no rebuttal submitted.""" 122,"""A deep learning approach to segmentation of the developing cortex in fetal brain MRI with minimal manual labeling""","['Fetal', 'developing', 'brain', 'cortex', 'gray matter', '3D segmentation', 'deep learning']","""We developed an automated system based on deep neural networks for fast and sensitive 3D image segmentation of cortical gray matter from fetal brain MRI. The lack of extensive/publicly available annotations presented a key challenge, as large amounts of labeled data are typically required for training sensitive models with deep learning. To address this, we: (i) generated preliminary tissue labels using the Draw-EM algorithm, which uses Expectation-Maximization and was originally designed for tissue segmentation in the neonatal domain; and (ii) employed a human-in-the-loop approach, whereby an expert fetal imaging annotator assessed and refined the performance of the model. By using a hybrid approach that combined automatically generated labels with manual refinements by an expert, we amplified the utility of ground truth annotations while immensely reducing their cost (283 slices). The deep learning system was developed, refined, and validated on 249 3D T2-weighted scans obtained from the Developing Human Connectome Project's fetal cohort, acquired at 3T. Analysis of the system showed that it is invariant to gestational age at scan, as it generalized well to a wide age range (21 38 weeks) despite variations in cortical morphology and intensity across the fetal distribution. It was also found to be invariant to intensities in regions surrounding the brain (amniotic fluid), which often present a major obstacle to the processing of neuroimaging data in the fetal domain. ""","""Interesting paper on fetal MRI segmentation with minimal labeling requirements. Productive use of the rebuttal period. The reviewers appear largely in agreement after discussion. """ 123,"""Tackling the Problem of Large Deformations in Deep Learning Based Medical Image Registration Using Displacement Embeddings""","['deformable image registration', 'convolutional neural networks', 'thoracic CT']","""Though, deep learning based medical image registration is currently starting to show promising advances, often, it still fells behind conventional frameworks in terms of reg- istration accuracy. This is especially true for applications where large deformations exist, such as registration of interpatient abdominal MRI or inhale-to-exhale CT lung registra- tion. Most current works use U-Net-like architectures to predict dense displacement fields from the input images in different supervised and unsupervised settings. We believe that the U-Net architecture itself to some level limits the ability to predict large deformations (even when using multilevel strategies) and therefore propose a novel approach, where the input images are mapped into a displacement space and final registrations are reconstructed from this embedding. Experiments on inhale-to-exhale CT lung registration demonstrate the ability of our architecture to predict large deformations in a single forward path through our network (leading to errors below 2 mm).""","""The paper presents an extension of recent work based on keypoint-based CNN registration. All reviewers agree that the work is a incremental improvement over recent method (Heinrich et al, etc), and that a lot more work should be done before the work is mature. Nevertheless., as this is a short paper, the ideas presented and the thorough comparison make it borderline acceptable for MIDL short paper track. I strongly recommend that the authors address the reviewer worries by the conference time.""" 124,"""Feature Disentanglement to Aid Imaging Biomarker Characterization for Genetic Mutations""","['Generative Adversarial Networks', 'Feature Disentanglement', 'Brain Tumor', 'Magnetic Resonance Imaging', 'Limited Dataset']","""Various mutations have been shown to correlate with prognosis of High-Grade Glioma (Glioblastoma). Overall prognostic assessment requires analysis of multiple modalities: imaging, molecular and clinical. To optimize this assessment pipeline, this paper develops the first deep learning-based system that uses MRI data to predict 19/20 co-gain, a mutation that indicates median survival. The paper addresses two key challenges when dealing with deep learning algorithms and medical data: lack of data and high data imbalance. We propose an unified approach that consists of a Feature Disentanglement (FeaD-GAN) technique for generating synthetic images to address these challenges, that projects features and re-samples from a pseudo-larger data distribution to generate synthetic images from very limited data. A thorough analysis is performed to (a) characterize aspects of visual manifestation of 19/20 co-gain that demonstrates the effectiveness of FeaD-GAN and (b) demonstrate that not only do the imaging biomarkers of 19/20 co-gain exist, but they are reproducible as well.""","""The papers presents an original application of feature disentanglement GANs to characterize the manifestation of 19/20 co-gain in MRI of glioblastoma patients. The proposed approach is used as data augmentation to overcome the problem of limited and unbalanced data related to this genetic mutation. Experiments show that the model can reproduce imaging biomarkers relevant to 19/20 co-gain. In their rebuttal, authors have answered the reviewers' main concerns. """ 125,"""Pulmonary Nodule Malignancy Classification Using its Temporal Evolution with Two-Stream 3D Convolutional Neural Networks""","['Lung Cancer', 'Pulmonary Nodule Malignancy', 'Convolutional Neural Networks']","""Nodule malignancy assessment is a complex, time-consuming and error-prone task. Current clinical practice requires measuring changes in size and density of the nodule at different time-points. State of the art solutions rely on 3D convolutional neural networks built on pulmonary nodules obtained from a single CT scan per patient. In this work, we propose a two-stream 3D convolutional neural network that predicts malignancy by jointly analyzing two pulmonary nodule volumes from the same patient taken at different time-points. Best results achieve 77% of F1-score in test with an increment of 9% and 12% of F1-score with respect to the same network trained with images from a single time-point.""","""The reviews as well as myself agree that studying longitudinal scans for nodule malignancy classification is interesting and valuable. The paper is well written and clearly presented.""" 126,"""Iterative reconstruction artefact removal using null-space networks""","['null space.neural network', 'data consistency', 'image reconstruction', 'PET']","""Incorporation of resolution modelling (RM) into iterative reconstruction produces Gibbs ringing artefacts which adversely affect clinically used metrics such as SUV pseudo-formula . We propose the use of a null space network as a regulariser to compensate for these artefacts without introducing bias.""","""The paper lacks novelty and real scenario validation. Even if this is a short paper, the paper still lacks susbtantial details that are needed for the readers to understand both the theoretical and experimental contributions. """ 127,"""Slice-level Detection of Intracranial Hemorrhage on CT Using Deep Descriptors of Adjacent Slices""","['CNNs', 'CT', 'Intracranial Hemorrhage Detection']","""This paper proposes a new strategy to train slice-level classifiers on CT based on the descriptors of the adjacent slices along the axis, each of which is extracted through a convolutional neural network (CNN). This method aims to predict the presence of ICH and classify it into 5 different sub-types. We exploit a two-stage training scheme. In the first stage, we treat a CT scan simply as a set of 2D images and train a state-of-the-art CNN classifier that was pretrained on ImageNet. During the training, each slice is sampled together with the 3 slices before and the 3 slices after it, which makes the batch size a multiple of 7. In the second stage, the output descriptors of each block of 7 consecutive slices obtained from stage 1 are stacked into an image and fed to another CNN for final prediction of the middle slice. Our model is entirely trained on the RSNA dataset and additionally evaluated on the CQ500 dataset, which adopts the same a set of labels but only on study level. We obtain a single model in the top 4% best-performing solutions of the RSNA ICH challenge, where model ensembles are allowed. Experiments also show that the proposed method significantly outperforms the baseline model on CQ500.""","""The work is generally seen to not have much technical novelty and also to exhibit significant limitations such as the lack of validation and missing baseline comparisons. This makes it difficult to recommend acceptance.""" 128,"""Quantifying the Value of Lateral Views in Deep Learning for Chest X-rays""","['convolutional neural networks', 'chest x-rays', 'lateral views', 'multi-label classification']","""Most deep learning models in chest X-ray prediction utilize the posteroanterior (PA) view due to the lack of other views available. PadChest is a large-scale chest X-ray dataset that has almost 200 labels and multiple views available. In this work, we use PadChest to explore multiple approaches to merging the PA and lateral views for predicting the radiological labels associated with the X-ray image. We find that different methods of merging the model utilize the lateral view differently. We also find that including the lateral view increases performance for 32 labels in the dataset, while being neutral for the others. The increase in overall performance is comparable to the one obtained by using only the PA view with twice the amount of patients in the training set.""","""There seems to be general consensus that this paper presents an interesting study but with very limited methodological contribution. """ 129,"""Fast Mitochondria Detection for Connectomics""","['mitochondria detection', 'connectomics', 'image segmentation', 'biomedical imaging']","""High-resolution connectomics data allows for the identification of dysfunctional mitochondria which are linked to a variety of diseases such as autism or bipolar. However, manual analysis is not feasible since datasets can be petabytes in size. We present a fully automatic mitochondria detector based on a modified U-Net architecture that yields high accuracy and fast processing times. We evaluate our method on multiple real-world connectomics datasets, including an improved version of the EPFL mitochondria benchmark. Our results show a Jaccard index of up to 0.90 with inference times lower than 16ms for a 512x512px image tile. This speed is faster than the acquisition speed of modern electron microscopes, enabling mitochondria detection in real-time. Compared to previous work, our detector ranks first for real-time detection and can be used for image alignment. Our data, results, and code are freely available. ""","""Despite some criticism due to limited novelty, the reviewers mostly agree that the authors' work can act as a useful baseline reference for future research in connectomics given that they released new annotations and code for reproducibility. The rebuttal sufficiently addresses the negative review.""" 130,"""Automatic segmentation of the pulmonary lobes with a 3D u-net and optimized loss function""","['pulmonary lobes', 'lung lobes', 'segmentation', 'deep learning', 'CNN', '3D U-net']","""Fully-automatic lung lobe segmentation is challenging due to anatomical variations, pathologies, and incomplete fissures. We trained a 3D u-net for pulmonary lobe segmentation on 49 mainly publically available datasets and introduced a weighted Dice loss function to emphasize the lobar boundaries. To validate the performance of the proposed method we compared the results to two other methods. The new loss function improved the mean distance to 1.46 mm (compared to 2.08 mm for simple loss function without weighting).""","""Reviewers are generally in favor of the methodological contribution of this paper. However, two reviewers complain that there is related work from Gerard et al. in TMI which actually proposes to use fissures for lobe segmentation in a deep learning based framework. The work presented here reproduces the validity of this approach, which is a beneficial finding and thus interesting to report at MIDL 2020. However, these critical reviewer comments are crucial and need to be addressed in a final version of the paper. """ 131,"""DeepRetinotopy: Predicting the Functional Organization of Human Visual Cortex from Structural MRI Data using Geometric Deep Learning""","['fMRI', 'retinotopy', 'visual hierarchy', 'cortical folding', 'manifold', 'surface model']","""Whether it be in a man-made machine or a biological system, form and function are often directly related. In the latter, however, this particular relationship is often unclear due to the intricate nature of biology. Here we developed a geometric deep learning model capable of exploiting the actual structure of the cortex to learn the complex relationship between brain function and anatomy from structural and functional MRI data. Our model was not only able to predict the functional organization of human visual cortex from anatomical properties alone, but it was also able to predict nuanced variations across individuals.""","""The reviewers mostly appear in agreement that this is a well written paper with an interesting application and promising results within the MIDL domain. """ 132,"""Poolability and Transferability in CNN. A Thrifty Approach""","['Deep Learning', 'Receptive Field', 'Semantic Segmentation']","""The current trend in deep learning models for semantic segmentation are ever increasing model sizes. These large models need huge data-sets to be trained properly. However medical applications often offer only small data-sets available and require smaller models. A large part of these models' parameters is due to their multi-resolution approach for increasing the receptive field, i.e. alternating convolution and pooling layers for feature extraction. In this work an alternative parameter free approach is proposed to increase the receptive field. This significantly reduces the number of parameters needed in semantic segmentation models and allows them to be trained on smaller data-sets.""","""The reviewers had many critical comments about the comparison to state-of-the-art and missing references to literature. The authors strongly argue against them in their rebuttal and make some exaggerated statements, there have been alternative strategies to (non-paramterically) increase the receptive field with shallower networks, e.g. sparse 3D deformable convolutions (MIDL 2018) and learnable dilatation networks (GCPR 2017) along the mentioned PSP. The statement that average pooling (with large kernels) is more computationally expensive than max pooling is also incorrect. As mentioned by Reviewer#1 there are many different ways of limiting the parameter count of U-nets (so the statement of exponentially growing number of weights with increasing levels) does also not really hold. The idea may have some merit if evaluated and discussed more carefully, but unfortunately, the abovementioned negative aspects (and the generally low performance) reduce the enthusiasm for this approach.""" 133,"""A Cross-Stitch Architecture for Joint Registration and Segmentation in Adaptive Radiotherapy""","['Joint Registration and Segmentation', 'Multi-Organ Segmentation', 'Deformable Registration', 'Adaptive Radiotherapy', 'Contour Propagation', 'Convolutional Neural Networks (CNN)', 'Multi-Task Learning (MTL)']","""Recently, joint registration and segmentation has been formulated in a deep learning setting, by the definition of joint loss functions. In this work, we investigate joining these tasks at the architectural level. We propose a registration network that integrates segmentation propagation between images, and a segmentation network to predict the segmentation directly. These networks are connected into a single joint architecture via so-called cross- stitch units, allowing information to be exchanged between the tasks in a learnable manner. The proposed method is evaluated in the context of adaptive image-guided radiotherapy, using daily prostate CT imaging. Two datasets from different institutes and manufacturers were involved in the study. The first dataset was used for training (12 patients) and validation (6 patients), while the second dataset was used as an independent test set (14 patients). In terms of mean surface distance, our approach achieved 1.06 0.3 mm, 0.91 0.4 mm, 1.27 0.4 mm, and 1.76 0.8 mm on the validation set and 1.82 2.4 mm, 2.45 2.4 mm, 2.45 5.0 mm, and 2.57 2.3 mm on the test set for the prostate, bladder, seminal vesicles, and rectum, respectively. The proposed multi-task network outperformed single-task networks, as well as a network only joined through the loss function, thus demonstrating the capability to leverage the individual strengths of the segmentation and registration tasks. The obtained performance as well as the inference speed make this a promising candidate for daily re-contouring in adaptive radiotherapy, potentially reducing treatment-related side effects and improving quality-of-life after treatment.""","""All the reviewers recommended acceptance of this work. After reading their comments and discussion with the authors, I think this work should be accepted for publication at MIDL. Please, when submitting the Camera Ready version, take into account the suggestions made by the reviewers.""" 134,"""Cascaded Deep Neural Networks for Retinal Layer Segmentation of Optical Coherence Tomography with Fluid Presence""","['retinal layer segmentation', 'optical coherence tomography', 'fully convolutional network']","""Optical coherence tomography (OCT) is a non-invasive imaging technology that can provide micrometer-resolution cross-sectional images of the inner structures of the eye. It is widely used for the diagnosis of ophthalmic diseases with retinal alteration, such as layer deformation and fluid accumulation. In this paper, a novel framework was proposed to segment retinal layers with fluid presence. The main contribution of this study is two folds: 1) we developed a cascaded network framework to incorporate the prior structural knowledge; 2) we proposed a novel deep neural network based on U-Net and fully convolutional network, termed LF-UNet. Cross validation experiments proved that the proposed LF-UNet has superior performance comparing with the state-of-the-art methods, and incorporating the relative distance map structural prior information could further improve the performance regardless of the network.""","""There is a consensus among the reviewers (3 out of 4) in that this paper has merits to be accepted at MIDL. Reviewers generally consider that this works proposes a novel and valuable contribution to retinal image analysis community. While the experimental setting received some negative concerns during the initial reviewing process, the authors have positively addressed many of the comments. Taking into consideration both the initial and latest comments from the reviewers I recommend this paper for publication ('weak accept').""" 135,"""Joint Learning of Vessel Segmentation and Artery/Vein Classification with Post-processing""","['medical imaging', 'retina images', 'vessel segmentation', 'vessel classification', 'deep learning', 'computer vision']","""Retinal imaging serves as a valuable tool for diagnosis of various diseases. However, reading retinal images is a difficult and time-consuming task even for experienced specialists. The fundamental step towards automated retinal image analysis is vessel segmentation and artery/vein classification, which provide various information on potential disorders. To improve the performance of the existing automated methods for retinal image analysis, we propose a two-step vessel classification. We adopt a UNet-based model, SeqNet, to accurately segment vessels from the background and make prediction on the vessel type. Our model does segmentation and classification sequentially, which alleviates the problem of label distribution bias and facilitates training. To further refine classification results, we post-process them considering the structural information among vessels to propagate highly confident prediction to surrounding vessels. Our experiments show that our method improves AUC to 0.98 for segmentation and the accuracy to 0.92 in classification over DRIVE dataset.""","""The manuscript introduced a unified segmentation and classification framework for retinal vessels that is also able to do A/V separation. Two authors are supportive while a third is less so. However, some of the critiques by the unsupportive reviewers are less argued and justified than the positives of supporting reviewers. In addition, the authors have done a fair amount of very good work in addressing the concerns of the reviewers. They undertook additional experiments and adddressed each of the reviewers's points in turn and improving the actual manuscript. I am supportive of the acceptance and believe this is a paper somewhere between a 3 and a 4.""" 136,"""Unsupervised learning of multimodal image registration using domain adaptation with projected Earth Movers discrepancies""",[],"""Multimodal image registration is a very challenging problem for deep learning approaches. Most current work focuses on either supervised learning that requires labelled training scans and may yield models that bias towards annotated structures or unsupervised approaches that are based on hand-crafted similarity metrics and may therefore not outperform their classical non-trained counterparts. We believe that unsupervised domain adaptation can be beneficial in overcoming the current limitations for multimodal registration, where good metrics are hard to define. Domain adaptation has so far been mainly limited to classification problems. We propose the first use of unsupervised domain adaptation for discrete multimodal registration. Based on a source domain for which quantised displacement labels are available as supervision, we transfer the output distribution of the network to better resemble the target domain (other modality) using classifier discrepancies. To improve upon the sliced Wasserstein metric for 2D histograms, we present a novel approximation that projects predictions into 1D and computes the L1 distance of their cumulative sums. Our proof-of-concept demonstrates the applicability of domain transfer from mono- to multimodal 2D registration of canine MRI scans and improves the registration accuracy from 33% (using sliced Wasserstein) to 44%.""","""Two reviewers recommend weak acceptance while the other two reviewers recommend weak rejection. However, most of them seem to acknowledge the novelty of the paper. Since this is a short paper submission which seems to introduce a novel idea, I'm therefore inclined for accepting this publication. Importantly, for the camera ready version, please take into account the comments about the poor english quality raised by R4 and those related to how the organization of the paper could be improved made by R3. I suggest the authors to ask a native english speaker or an expert to proofread their manuscript.""" 137,"""Multi-Task Deep Learning: Simultaneous Segmentation and Survival Analysis via Cox Proportional Hazards Regression""","['Multi-task learning', 'Survival analysis', 'Cox regression', 'Segmentation']","""Multi-task learning has taken an important place as a tool for medical image analysis, namely for the development of predictive models of disease. This study aims at developing a new deep learning model for simultaneous segmentation and survival regression, using a version of the Cox model to support the learning process. We use a combination of a 2D U-net and a residual network to minimize a combined loss function for segmentation and survival regression. To validate our method, we created a simple synthetic data set - the model segments circles of different sizes and regresses the area of circles. The main motivation of this work is to create a workflow for segmentation and regression for medical images application: in specific, we use this model to segment lesions or organs and regress clinical outcomes as overall or disease-free survival.""","""The authors propose a multi-task architecture to simultaneously perform image segmentation and survival analysis. The interest of the problem has been acknowledged by the reviewers. The major flaw of the paper, as noted by all reviewers, is the experimental part: results are obtained on a synthetic dataset only (circle randomly located in the image), which appears to be too simple w.r.t. the real problem at hand. Also it is not clear how the synthetic regression problem, ie predicting the circle area, is related to the probability of survival. The other point concerns the regression loss: it is unclear why a partial log-likelihood loss function is used, whereas standard regression losses (mse, mae) could be assessed in the first place. In addition to reviewing these two points, I suggest the authors to enhance the paper by (i) highlighting clearly the performance of the single task network, (ii) performing statistical analysis to show the superiority of the proposed architecture, (iii) improving the quality of Kaplan-Meir curves in Figure 2.""" 138,"""MAC-ReconNet: A Multiple Acquisition Context based Convolutional Neural Network for MR Image Reconstruction using Dynamic Weight Prediction""","['Multiple acquisition contexts', 'Dynamic weight prediction', 'MRI reconstruction']","""Convolutional Neural network based MR reconstruction methods have shown to provide fast and high quality reconstructions. A primary drawback with a CNN-based model is that it lacks flexibility and can effectively operate only for a specific acquisition context limiting practical applicability. By acquisition context, we mean a specific combination of three input settings considered namely, the anatomy under study, undersampling mask pattern and acceleration factor for undersampling. The model could be trained jointly on images combining multiple contexts. However the model does not meet the performance of context specific models nor extensible to contexts unseen at train time. This necessitates a modification to the existing architecture in generating context specific weights so as to incorporate flexibility to multiple contexts. We propose a multiple acquisition context based network, called MAC-ReconNet for MRI reconstruction, flexible to multiple acquisition contexts and generalizable to unseen contexts for applicability in real scenarios. The proposed network has an MRI reconstruction module and a dynamic weight prediction (DWP) module. The DWP module takes the corresponding acquisition context information as input and learns the context-specific weights of the reconstruction module which changes dynamically with context at run time. We show that the proposed approach can handle multiple contexts based on Cardiac and Brain datasets, Gaussian and Cartesian undersampling patterns and five acceleration factors. The proposed network outperforms the naive jointly trained model and gives competitive results with the context-specific models both quantitatively and qualitatively. We also demonstrate the generalizability of our model by testing on contexts unseen at train time.""","""Generally a good paper to address the generalization issue of deep learning method. The main contribution lies in the introduction of a dynamic weight prediction (DWP) module. The only concern I have is that the authors should avoid overcliams since the investigation of anatomy and sampling patterns is limited. """ 139,"""Priority Unet: Detection of Punctuate White Matter Lesions in Preterm Neonate in 3D Cranial Ultrasonography""","['Soft attention', 'U-Net', 'Detection', '3D Ultrasound', 'Preterm Neonates brain imaging', 'Ponctuate white matter lesion']","""Brain damage, particularly of cerebral white matter (WM), observed in premature infants in the neonatal period is responsible for frequent neurodevelopmental sequelae in early childhood and [V Pierrat et al. EPIPAGE-2 cohort study. BMJ. 2017]. Punctuate white matter lesions (PWML) are most frequent WM abnormalities, occurring in 1835% of all preterm infants [AL Nguyen et al. Int Journal of Developmental Neuroscience, 2019] [N. Tusor et al, Scientific Reports, 2017]. Accurately assessing the volume and localisation of these lesions at the early postnatal phase can help paediatricians adapting the therapeutic strategy and potentially reduce severe sequelae. MRI is the gold standard neuroimaging tool to assess minimal to severe WM lesions, but it is only rarely performed for cost and accessibility reasons. Cranial ultrasonography (cUS) is a routinely used tool, however, the visual detection of PWM lesions is challenging and time consuming because these lesions are small with variable contrast and no specific pattern. There are also weak anatomical landmarks in neonate brains as the brain structures are moving and not fully developed. Research on automatic detection of PWML on MRI based on standard image analysis was initiated by Mukherjee [Mukherjee, S. et al. MBEC 57(1), 71-87, 2019]. One other team has recently tackled this issue based on deep architectures [Y Liu et al. MICCAI 2019]. Despite the high contrast and low noise of MR images, this algorithm struggles with low accuracy over the PWML detection task. As far as we know, there is currently no known research team working on automatic segmentation of PWML on US data. This task is highly challenging because of the speckle noise, low contrast and the high acquisition variability. In this paper, we introduce a novel architecture based on the U-Net backbone to perform the detection and segmentation of PWML in cUS images. This model combines a soft attention model focusing on the PWML localisation and the self balancing focal loss (SBFL) introduced by Lin [Liu et al, arxiv, 2019]. The soft attention mask is a 3D probabilistic map derived from spatial prior knowledge of PWML localisation computed from our dataset. Performance of this model is evaluated on a dataset of cUS exams including 21 patients acquired with a Acuson Siemens 4-9 MHZ probe. For each exam, a 3D volume of dimension 360x400x380 was reconstructed with an isotropic spatial resolution of 0.15 mm. A total of 547 lesions were delineated on the images by an expert pediatrician. For this study, we considered 131 lesions with a volume bigger than 1.7 pseudo-formula . Volumes of PWM lesions range from 1.75 pseudo-formula to 61.09 pseudo-formula with a median size of 4 pseudo-formula . The deep model was trained and validated with a 10-fold cross-validation based on approximately 3000 coronal slices extracted from the 3D volumes . We also performed an ablation study to evaluate the impact of the attention gate and the focal loss. Detection performance was assessed at the lesion level, thus meaning that we performed a cluster analysis on the label maps outputted by the network using a 3D connectivity rule to identify the connected components. Compared to the U-Net, the priority U-Net with SBFL increases the recall and the precision in the detection task from 0.4404 to 0.5370 and from 0.3217 to 0.5043, respectively. The Dice metric is also increased from 0.3040 to 0.3839 in the segmentation task. In this study, we proposed the first use case of automated detection of PWML in cUS exams of preterms neonates as well as a novel deep architecture inspired from the attention gated U-Net combined with the self-balancing focal loss. Our results are shown to outperform the standard U-Net for this challenging detection task.""","""The paper tackles a very challenging problem and presents good results. The rebuttal appropriately addresses questions of the reviewers and justifies the methodological choices made. """ 140,"""Beyond Classfication: Whole Slide Tissue Histopathology Analysis By End-To-End Part Learning""",[],"""An emerging technology in cancer care and research is the use of histopathology whole slide images (WSI). Leveraging computation methods to aid in WSI assessment poses unique challenges. WSIs, being extremely high resolution giga-pixel images, cannot be directly processed by convolutional neural networks (CNN) due to huge computational cost. For this reason, state-of-the-art methods for WSI analysis adopt a two-stage approach where the training of a tile encoder is decoupled from the tile aggregation. This results in a trade-off between learning diverse and discriminative features. In contrast, we propose end-to-end part learning (EPL) which is able to learn diverse features while ensuring that learned features are discriminative. Each WSI is modeled as consisting of pseudo-formula groups of tiles with similar features, defined as parts. A loss with respect to the slide label is backpropagated through an integrated CNN model to pseudo-formula input tiles that are used to represent each part. Our experiments show that EPL is capable of clinical grade prediction of prostate and basal cell carcinoma. Further, we show that diverse discriminative features produced by EPL succeeds in multi-label classification of lung cancer architectural subtypes. Beyond classification, our method provides rich information of slides for high quality clinical decision support.""","""All reviewers indicate acceptance and highlight that the method is interesting and solves a relevant problem. The authors show their method's performance on several relevant tasks and compare it to a baseline. As such, I also strongly recommend acceptance.""" 141,"""Uncertainty Evaluation Metrics for Brain Tumour Segmentation""","['Brain Tumour Segmentation', 'Deep Neural Network', 'Uncertainty Evaluation']","""In this paper, we describe and explore the metric that was designed to assess and rank uncertainty measures for the task of brain tumour sub-tissue segmentation in the BraTS 2019 sub-challenge on uncertainty quantification. The metric is designed to (1) reward uncertainty measures where high confidence is assigned to correct assertions, and where incorrect assertions are assigned low confidence and (2) penalize measures that have higher percentages of under-confident correct assertions. Here, the workings of the metrics explored based on a number of popular uncertainty measures evaluated on the BraTS2019 dataset""","""This paper presents a simple yet effective method to evaluate uncertainty applied to tumor segmentation problem. This short paper is well written and the results seem relevant to MIDL. """ 142,"""Siamese Content Loss Networks for Highly Imbalanced Medical Image Segmentation""","['Semantic Segmentation', 'White Matter Hyperintensities', 'Siamese Networks', 'Medical Imaging', 'Magnetic Resonance Imaging', 'Label Imbalance', 'Transfer Learning']","""Automatic segmentation of white matter hyperintensities (WMHs) in magnetic resonance imaging (MRI) remains highly sought after due to the potential to streamline and alleviate clinical workflows. WMHs are small relative to whole acquired volume, which leads to class imbalance issues, and instability during the training process of many deep learning based solutions. To address this, we propose a method which is robust to effects of class imbalance, through incorporating multi-scale information in the training process. Our method consists of training an encoder-decoder neural network utilizing a Siamese network as an auxiliary loss function. These Siamese networks take in pairs of image pairs, input images masked with ground truth labels, and input images masked with predictions, and computes multi-resolution feature vector representations and provides gradient feedback in the form of a L2 norm. We leverage transfer learning in our Siamese network, and present positive results without need to further train. It was found these methods are more robust for training segmentation neural networks and provide greater generalizability. Our method was cross-validated on multi-center data, yielding significant overall agreement with manual annotations. ""","""The paper presents an overall interesting development and work on the loss function to improve segmentation in the case of strong imbalance. Reviewers are consistently praising the quality of the rebuttal and the revised paper has the potential to be a much-improved version of interest to the MIDL community.""" 143,"""Model Averaging and Augmented Inference for Stable Echocardiography Segmentation using 2D ConvNets""","['Convolutional Neural Networks', 'Echocardiography', 'Segmentation', 'Data Augmentation']","""The automatic segmentation of heart substructures in 2D echocardiography images is a goal common to both clinicians and researchers. Convolutional neural networks (CNNs) have recently shown the best average performance. However, on the rare occasions that a trained CNN fails, it can fail spectacularly. To mitigate these errors, in this work we develop and validate two easily implementable schemes for regularizing performance in 2D CNNs: model averaging and augmented inference. Model averaging involves training multiple instances of a CNN with data augmentation over a sampled training set. Augmented inference involves accumulating network output over augmentations of the test image. Using the recently released CAMUS echocardiography dataset, we show significant incremental improvement in outlier performance over the baseline model. These encouraging results must still be validated against independent clinical data.""","""This is a well written paper. But like the reviewers, I lean towards a weak reject as the improvements of the proposed method are quite humble to say the least (c.f. Fig1). Furthermore, clear statistics on the reduction of the number of outliers are missing. This is too bad considering that this was the gaol mentioned in the abstract : "" However, on the rare occasions that a trained CNN fails, it can fail spectacularly. To mitigate these errors, in this work we develop and validate two easily implementable schemes for regularizing performance in 2D CNNs""""" 144,"""Understanding Alzheimer diseases structural connectivity through explainable AI""","['Structural connectome', 'diffusion weighted MRI', 'deep learning', 'saliency maps', 'Alzheimer’s Disease']","""In the following work, we use a modified version of deep BrainNet convolutional neural network (CNN) trained on the diffusion weighted MRI (DW-MRI) tractography connectomes of patients with Alzheimers Disease (AD) and Mild Cognitive Impairment (MCI) to better understand the structural connectomics of that disease. We show that with a relatively simple connectomic BrainNetCNN used to classify brain images and explainable AI techniques, one can underline brain regions and their connectivity involved in AD. Results reveal that the connected regions with high structural differences between groups are those also reported in previous AD literature. Our findings support that deep learning over structural connectomes is a powerful tool to leverage the complex structure within connectomes derived from diffusion MRI tractography. To our knowledge, our contribution is the first explainable AI work applied to structural analysis of a degenerative disease.""","""While the novelty of the paper is limited, I do believe that the novel application of DL to DWI connectivity for classification could spark some interesting discussion at the conference. """