{"forum": "S1xg4W-leV", "submission_url": "https://openreview.net/forum?id=S1xg4W-leV", "submission_content": {"title": "Unsupervised Lesion Detection via Image Restoration with a Normative Prior", "authors": ["Suhang You", "Kerem Tezcan", "Xiaoran Chen", "Ender Konukoglu"], "authorids": ["jadenyou1989@gmail.com", "tezcan@vision.ee.ethz.ch", "chenx@vision.ee.ethz.ch", "ender.konukoglu@vision.ee.ethz.ch"], "keywords": [], "abstract": "While human experts excel in and rely on identifying an abnormal structure when assessing a medical scan, without necessarily specifying the type, current unsupervised abnormality detection methods are far from being practical. Recently proposed deep-learning (DL) based methods were initial attempts showing the capabilities of this approach. In this work, we propose an outlier detection method combining image restoration with unsupervised learning based on DL. A normal anatomy prior is learned by training a Gaussian Mixture Variational Auto-Encoder (GMVAE) on images from healthy individuals. This prior is then used in a Maximum-A-Posteriori (MAP) restoration model to detect outliers. Abnormal lesions, not represented in the prior, are removed from the images during restoration to satisfy the prior and the difference between original and restored images form the detection of the method. We evaluated the proposed method on Magnetic Resonance Images (MRI) of patients with brain tumors and compared against previous baselines. Experimental results indicate that the method is capable of detecting lesions in the brain and achieves improvement over the current state of the art.", "pdf": "/pdf/962b92d2bb4fbbac3e29b7ee7c2cd25ada1df436.pdf", "code of conduct": "I have read and accept the code of conduct.", "paperhash": "you|unsupervised_lesion_detection_via_image_restoration_with_a_normative_prior", "_bibtex": "@inproceedings{you:MIDLFull2019a,\ntitle={Unsupervised Lesion Detection via Image Restoration with a Normative Prior},\nauthor={You, Suhang and Tezcan, Kerem and Chen, Xiaoran and Konukoglu, Ender},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=S1xg4W-leV},\nabstract={While human experts excel in and rely on identifying an abnormal structure when assessing a medical scan, without necessarily specifying the type, current unsupervised abnormality detection methods are far from being practical. Recently proposed deep-learning (DL) based methods were initial attempts showing the capabilities of this approach. In this work, we propose an outlier detection method combining image restoration with unsupervised learning based on DL. A normal anatomy prior is learned by training a Gaussian Mixture Variational Auto-Encoder (GMVAE) on images from healthy individuals. This prior is then used in a Maximum-A-Posteriori (MAP) restoration model to detect outliers. Abnormal lesions, not represented in the prior, are removed from the images during restoration to satisfy the prior and the difference between original and restored images form the detection of the method. We evaluated the proposed method on Magnetic Resonance Images (MRI) of patients with brain tumors and compared against previous baselines. 
Experimental results indicate that the method is capable of detecting lesions in the brain and achieves improvement over the current state of the art.},\n}"}, "submission_cdate": 1544716583615, "submission_tcdate": 1544716583615, "submission_tmdate": 1561397808207, "submission_ddate": null, "review_id": ["BJxlnypimV", "rklI6F7c7V", "B1l8-ho_X4"], "review_url": ["https://openreview.net/forum?id=S1xg4W-leV&noteId=BJxlnypimV", "https://openreview.net/forum?id=S1xg4W-leV&noteId=rklI6F7c7V", "https://openreview.net/forum?id=S1xg4W-leV&noteId=B1l8-ho_X4"], "review_cdate": [1548631976092, 1548528061795, 1548430334015], "review_tcdate": [1548631976092, 1548528061795, 1548430334015], "review_tmdate": [1549872841975, 1548856735527, 1548856729720], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper57/AnonReviewer2"], ["MIDL.io/2019/Conference/Paper57/AnonReviewer1"], ["MIDL.io/2019/Conference/Paper57/AnonReviewer3"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["S1xg4W-leV", "S1xg4W-leV", "S1xg4W-leV"], "review_content": [{"pros": "The paper introduces a novel unsupervised method for lesion detection based on a normative prior. The paper is well-written, and the method is validated on a publicly available database, showing an improvement over the state of the art. \n", "cons": "While the proposed approach is quite interesting, the experiments do not fully validate the novel contributions. For instance, I was expecting to see the following experiments: \n\n1. A comparison with spatial VAE (Baur et al. 2018), which is similar to the proposed method but uses a single multivariate Gaussian --> To validate the need for modeling the latent code as a mixture of Gaussians.\n2. A comparison of GMVAE (w/o Image restoration) vs. GMVAE(TV) --> To validate the need for Image Restoration. For instance, n = 0 vs. n = 500 steps as reported in the paper. \n\nFurther, I was expecting a section on the sensitivity analysis showing the following: \n\n1. the influence of the number of mixtures \n2. the influence of the number of steps in the image restoration (accuracy vs. time complexity) \n\n\nApart from that, here are some questions/comments: \n1. The network p(c|z,w) wasn't reported in Appendix A, so I was wondering whether it was implemented or not. Any observations regarding the last term in Eq.2, similar to what is reported in Dilokthanakul et al. 2016? \n2. If Eq.2 has converged, then can't we detect outliers from p(c|z,w)? For instance, outlier pixels (regions) would have lower probabilities in all mixtures and should be easily detected. \n3. Can't we use the MR distribution, i.e., WM, GM, CSF, and background as p(c)? \n4. M in Eq.7 is not defined. ", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature", "special_issue": ["Special Issue Recommendation"], "oral_presentation": ["Consider for oral presentation"]}, {"pros": "The paper addresses the problem of brain tumour segmentation from an unsupervised viewpoint, which is a very useful approach in medical imaging where data annotation is expensive. The method segments tumours as outliers from a learned representation of healthy images. The tumour detection is done by solving a MAP problem, where the prior distribution of the data was approximated using a Gaussian Mixture Variational Autoencoder (GMVAE). 
The data consistency term is optimized using a Total Variation norm. \n\nThe paper is well written and clear. A nice summary of VAE and GMVAE is presented, followed by the description of the contribution. \n\nResults are promising. The method is compared with a few other deep learning based unsupervised methods and achieves good performance. ", "cons": "The majority of the method was proposed in an earlier paper from the same group [Tezcan et al. 2017]. This paper applies that method to the new context of brain tumor lesion segmentation, with small modifications due to the different task. \nThis group also had a similar paper in MIDL 2018 where they applied slightly different methods (VAE, AAE, as opposed to the GMVAE in this paper) to the same problem: https://openreview.net/forum?id=H1nGLZ2oG\nThis makes the contribution rather incremental. \n\nWhile the experiments are good for comparing the method with other similar unsupervised methods, it is not shown how the method compares with the state of the art on this competition data. This would help determine whether this is practically a very useful approach. \n\nThe experiments also lack some details, which made them hard to understand:\n- The description of DSC-AUC wasn\u2019t clear.\n- Two of the baseline models, VAE-256 and VAE-128, weren\u2019t described.\n- In the histogram equalization part, a subject was randomly chosen from the CamCANT2 dataset as the reference. It was not shown in the paper whether this is a sensitive parameter. Thus a potential issue here is that not knowing which specific subject was chosen might make it hard to reproduce the result.\n\nTypos:\n-Page 2 top: \u201cpatients making them attractive\u201d -> \u201cpatients, making them attractive\u201d\n-Page 8: in conclusion, line 4, DCSs -> DSCs\n\n", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"pros": "The authors propose a novel unsupervised anomaly detection and segmentation approach that utilizes a Gaussian mixture variational auto-encoder (GMVAE) to learn the prior distribution on healthy subject images. The images containing anomalies are restored using the learned prior, incorporating total variation for data consistency, and the residuals are computed by subtracting the restored from the original image. By thresholding the residuals, the pixel-wise anomaly detection map is obtained. The proposed method was trained on 652 images of healthy subjects and the anomaly detection approach applied to the BRATS 2017 challenge datasets containing brain tumors as the anomaly. 
\n\n- The unsupervised approach is well motivated and the literature review is extensive\n- The reconstruction methodology incorporating the normative prior seems novel and, in general, is described clearly and concisely ", "cons": "- The method was applied to 2D image slices and does not take full 3D information into account\n\n- Results in Table 1, for instance the DSC_AUC, for brain tumor detection are rather poor compared to the equivalent Dice_WT score for most other tested methods (see https://www.cbica.upenn.edu/BraTS17/lboardValidation.html)\n\n- Some validation metrics are not defined; for instance, DSC_AUC is not defined in section 2.4 but is, presumably, obtained by maximizing the TPR-FPR value; please clarify\n\n- There are two approaches to residual map computation, but results are not reported consistently; namely, the signed-difference-based residual map calculation is proposed ad hoc at the end of results section 4.2, indicating much improved results, while unfortunately the results are not reported in the same manner as before", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct", "oral_presentation": ["Consider for oral presentation"]}], "comment_id": ["S1e1U5n644", "rkggmq2a4V", "HkgQIVhaNN", "BkenkV3T4V", "ByeBcCoAVN", "SylYnRo04E"], "comment_cdate": [1549810247187, 1549810199725, 1549808715280, 1549808612209, 1549872781502, 1549872817127], "comment_tcdate": [1549810247187, 1549810199725, 1549808715280, 1549808612209, 1549872781502, 1549872817127], "comment_tmdate": [1555945982886, 1555945982622, 1555945982363, 1555945982140, 1555945970427, 1555945970210], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper57/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper57/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper57/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper57/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper57/AnonReviewer2", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper57/AnonReviewer2", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Response to AnonReviewer2: 1) additional experiments for method comparison", "comment": "We thank the reviewer for the suggestions and questions. Due to reply space limitations, we address the concerns in two sections: 1) additional experiments for method comparison; 2) sensitivity analysis, additional questions and comments.\n\n>> The reviewer asks for additional experimental analysis to characterize the proposed method better. While we agree with the reviewer, we also note that the submission is a conference article with space limitations. We had to prioritize in this preliminary work; we introduced the method and compared with other recently published results on the same dataset. Indeed, adding all the analyses the reviewer mentions would have resulted in a longer manuscript. That said, here we present results from further experimental analysis that we believe addresses the concerns of the reviewer. We note that all the additional analyses are in support of the proposed method. 
If accepted, we will add the important results to the article prior to publication.\n> Comparison to spatial VAE: We believe the first concern has two parts. The first part questions the value of using a mixture of Gaussians compared to a single one. Based on the reviewer\u2019s suggestion, we used a single Gaussian in the prior model while keeping the network structure and size of the latent space fixed. The results for this model are\nDSC_AUC | AUC | FPR | FNR | DSC1 | DSC5 | DSC10\n0.165+-0.106 | 0.710 | 0.329 | 0.350 | 0.044+-0.061 | 0.176+-0.174 | 0.210+-0.179\nThe detection accuracies are lower than with the mixture of Gaussians, which was expected. Upon acceptance, we will add this result to the article for completeness. \n\n> Comparison with Baur et al.: The second part requests comparisons with Baur et al. We actually experimented with this method prior to the submission; however, due to non-optimal results, we refrained from reporting them. \nThe original article did not provide results on the same dataset we used; they used MS lesions as the example application, hence we could not directly report the values from there. Furthermore, to the best of our knowledge, code for this paper is not publicly available. Therefore, we implemented it from scratch to the best of our ability, trying to reproduce the method as much as possible by filling in the technical details not provided in the mentioned article. We constructed an encoder/decoder structure that used images of size 256x256, the same size as reported in Baur et al., and had the same latent space structure, 16x16x64, as reported in the paper. We used 4 downsampling residual blocks for the encoder and 4 upsampling residual blocks for the decoder. For the discriminator, we used the same model that was used in AnoGAN. We optimized the networks with the loss function presented in Baur et al.; however, since exact lambda weight values were not provided, we experimented with different alternatives. Despite our effort, the best results we achieved on the BraTS dataset were AUC=0.59, DSC1=0.08 and DSC5=0.13. As these results were not on par with the proposed model or the alternatives, and the authors did not report any experimental results on the BraTS dataset, we refrained from reporting the results for fairness, and decided to reach out to the authors for the follow-up journal article. Instead, we compare the proposed method with various recently published methods on the same dataset. \nIn summary, we put considerable effort into making the mentioned method work but were not able to achieve good results yet. We also would like to note that we implemented AnoGAN from scratch, which was originally presented for OCT images, and were able to make it work on the BraTS dataset. \n\n>> Comparison to GMVAE without image restoration: We did not provide experimental results comparing with GMVAE without restoration because the differences were huge and we believed the reasons were obvious. For completeness, we provide below the accuracy results when the GMVAE is used without follow-up restoration:\nDSC_AUC | AUC | FPR | FNR | DSC1 | DSC5 | DSC10\n0.125+-0.070 | 0.477 | 0.766 | 0.225 | 0.001+-0.003 | 0.004+-0.006 | 0.010+-0.010\nWithout image restoration, detection performance dropped substantially. The prior model was able to reconstruct the abnormal pixels while assigning higher variance to them. As a result, the reconstruction error of the abnormal pixels was similar to that of the others. The restoration method was able to address these issues. 
We note that these observations also formed our initial motivation to build the restoration method. "}, {"title": "Response to AnonReviewer2: 2) sensitivity analysis, additional questions and comments", "comment": ">> Sensitivity analysis: \n> Regarding the influence of the number of mixture components, we experimented using c=3, c=9, and c=12, and report the respective DSC_AUC, AUC, FPR, FNR, DSC1, DSC5 and DSC10 values below:\n | DSC_AUC | AUC | FPR | FNR | DSC1 | DSC5 | DSC10\nc=3 | 0.273+-0.157 | 0.783 | 0.201 | 0.346 | 0.042+-0.067 | 0.259+-0.191 | 0.315+-0.194\nc=9 | 0.352+-0.177 | 0.818 | 0.118 | 0.353 | 0.181+-0.194 | 0.436+-0.225 | 0.410+-0.200\nc=12 | 0.297+-0.166 | 0.769 | 0.138 | 0.427 | 0.078+-0.133 | 0.370+-0.220 | 0.327+-0.183\n\nDue to time limits, we only experimented with c=6 in the original submission. The results suggest that the performance may change with the number of mixture components; however, the value of the GMVAE holds for the different c values we experimented with. While this sensitivity analysis is definitely useful, we note that the results fully support the proposed method. We are happy to add these results to the article if accepted.\n> Regarding the time complexity analysis, below we provide detection accuracies for different numbers of restoration steps. We experimented with c=6, the same value used in the original submission.\nStep | DSC_AUC | AUC | FPR | FNR | DSC1 | DSC5 | DSC10\ni=50 | 0.194+-0.119 | 0.668 | 0.241 | 0.502 | 0.002+-0.006 | 0.208+-0.156 | 0.205+-0.131\ni=100 | 0.245+-0.143 | 0.714 | 0.164 | 0.494 | 0.033+-0.070 | 0.281+-0.169 | 0.227+-0.131\ni=150 | 0.295+-0.168 | 0.740 | 0.127 | 0.471 | 0.032+-0.081 | 0.344+-0.209 | 0.286+-0.164\ni=200 | 0.298+-0.172 | 0.737 | 0.120 | 0.479 | 0.041+-0.095 | 0.346+-0.212 | 0.287+-0.166\ni=300 | 0.299+-0.174 | 0.733 | 0.116 | 0.485 | 0.054+-0.111 | 0.347+-0.213 | 0.286+-0.166\ni=400 | 0.300+-0.174 | 0.732 | 0.113 | 0.490 | 0.060+-0.120 | 0.347+-0.213 | 0.284+-0.165\ni=500 | 0.301+-0.175 | 0.732 | 0.113 | 0.490 | 0.064+-0.126 | 0.347+-0.213 | 0.284+-0.165\nWe see that the detection performance improves until 150 steps and then converges.\n\n>> Modelling of p(c|z,w): p(c|z,w) is explicitly calculated in the following way. With a sample w_0 drawn from the mean and standard deviation of w, we pass w_0 through a 2-layer convolutional network (kernel size of 1x1, relu activation for the hidden layer, identity activation for the output layer) to output the mean and standard deviation of p(z|w). Then, for each dimension of z_0, we compute the probability of the sampled z_0 under the Gaussian distribution parametrized by the mean and standard deviation of p(z|w). As each dimension of z_0 is assigned to c clusters with probability p_i of being assigned to each cluster, \sum_i^c {p_i} = 1, we compute the softmax of these probabilities and thereby obtain p(c|z,w). We implemented the GMVAE in TensorFlow, similar to the version in the github repository (branch version2) by Nat Dilokthanakul et al. \n\n>> Regarding detection with p(c|w,z), that is a very interesting question and it was our initial intuition as well. Our initial experiments with VAEs show, unfortunately, that this does not hold. Outliers still achieve high probability in the latent space. However, we agree with the reviewer about the potential of this avenue and we will explore this further. \n\n>> Using MR tissue probabilities for p(c) is also a very good suggestion and is worth exploring further. Here, we assumed no predetermined labels or knowledge of the possible number/type of labels in our task, and p(c) is a uniform distribution. We believe that adding more knowledge into the system would improve detection accuracy.\n\n>> Explanation of notations: Thanks for the suggestion. 
In Eq. 7, M is defined as the number of subjects in {Y_s}. For this step of the experiment, we chose the remaining 52 subjects from the CamCANT2 dataset, which were not used during training."}, {"title": "Response to AnonReviewer3", "comment": "We thank the reviewer for the suggestions and questions. \nFirstly, the reviewer raises the issue of not using full 3D information. In our case the main limitation is computational memory; the method\u2019s principles are not limited to 2D. We believe a dedicated study focusing on the differences between 2D and 3D network-based approaches would be very useful, where the emphasis may be on 3D convolutional networks that have lower memory requirements. This is somewhat outside the scope of the article.\n\nSecondly, the dice scores provided in the BraTS challenge are for supervised methods. They are not directly comparable to unsupervised detection results, which is the class our proposed method belongs to. Supervised segmentation requires dense labels, while unsupervised detection does not need this information. The unsupervised detection problem has been much more challenging; the dice scores for all methods are lower. Therefore, we believe this issue is not directly related to the specific method proposed here. We believe the proposed method advances the state of the art in unsupervised detection. That said, we also acknowledge the value of providing state-of-the-art segmentation results for comparison, to illustrate the gap between supervised and unsupervised approaches. If accepted, we will add a line in the respective table to provide these values. \n\nThirdly, the DSC_AUC scores are calculated as the reviewer described. They provide optimistic scores and are therefore not used to assess the method. DSC1, DSC5 and DSC10 are much more important metrics. \n\nLastly, we apologize for the confusion to which the last paragraph led. For the outlier detection, the proposed method uses unsigned residual maps. We provided the results for signed residual maps just to demonstrate the amount of improvement even a tiny bit of information on the abnormality yields, in this case, the fact that the abnormality leads to hyperintense regions in T2_w MRI. \n\nThat said, out of curiosity, we ran the experiments for the different methods with signed residual maps. The results from the experiments that we could finish in time are given below\n\n|| Methods || AUC || DSC1 || DSC5 || DSC10 ||\n|| GMVAE || 0.888 || 0.345 || 0.362 || 0.321 ||\n|| VAE-128 || 0.892 || 0.072 || 0.227 || 0.286 ||\n|| AAE-128 || 0.800 || 0.027 || 0.205 || 0.260 ||\n|| AnoGAN || 0.758 || 0.059 || 0.228 || 0.227 ||\n\nIf the article is accepted, we propose to move the section as well as this table to the appendix, as these results may lead to further confusion for the readers.\n"}, {"title": "Response to AnonReviewer1", "comment": "We thank the reviewer for the suggestions and questions. We will correct the typos in the updated version of our paper. We address the concerns as follows:\n1. Novelty and contribution: This paper is a result of our continuing effort to integrate learned priors to solve difficult analysis tasks. It is based on the previous work of \u201cMR image reconstruction using deep density priors\u201d (Tezcan et al., 2017), but it is a novel application that requires some methodological modifications (specifically, the reconstruction here is based on an L1 similarity) and application-specific analyses. 
Compared to our anomaly detection work from last year, here we took a different approach by introducing image restoration after image reconstruction. In our previous submission, we performed anomaly detection on the residual map obtained from the original image and the reconstructed image, which exhibited high false positive rates in images of higher resolution. The method proposed here addresses this issue to some extent and yields higher AUC and DICE scores, as shown in the experimental results. \n\n2. Performance gap with reported BraTS challenge baselines: The baseline the reviewer mentions is for supervised segmentation. Naturally, the segmentation accuracy in that scenario is much higher; the Dice score is around 0.90. Here our aim is unsupervised detection. While a direct comparison is not possible, we agree with the reviewer that presenting the performance of supervised segmentation is helpful for understanding the disparity between state-of-the-art unsupervised detection and supervised segmentation. We will add this line to the table in the final version of the article if it gets accepted. \n\n3. Calculation of DSC-AUC: We obtain the threshold value from the ROC, as the value that provides the maximum (tpr-fpr) difference, and use it to threshold the residual maps to compute a dice score. This yields an optimistic score and is not used to assess the model. \n\n4. VAE-256 is a VAE model with input images of size 256x256, and VAE-128 is a VAE model with input images of size 128x128; images of different sizes are obtained by resizing the original images. Both of these models have been evaluated in previous studies on the same dataset as the one used in this article. Due to space restrictions, we could only provide a reference where these models are explained in depth. \n\n5. Details of histogram equalization: We agree that this could have been an issue. However, before this decision, we assessed the histogram differences in the CamCANT2 dataset. These differences were minimal; therefore, the choice of reference does not have a significant effect on performance. To support reproducibility, we verified the ID of the randomly chosen subject and provide it here: sub-CC110033. "}, {"title": "Great efforts! ", "comment": "I would like to thank the authors for running a few more experiments to validate their contributions! After reading your response and the results you shared with us, I'm quite happy to change my rating from reject to accept, with a recommendation for an oral presentation as well. Great job! "}, {"title": "Great efforts! ", "comment": "I would like to thank the authors for running a few more experiments to validate their contributions! After reading your response and the results you shared with us, I'm quite happy to change my rating from reject to accept, with a recommendation for an oral presentation as well. Great job! 
"}], "comment_replyto": ["BJxlnypimV", "BJxlnypimV", "B1l8-ho_X4", "rklI6F7c7V", "S1e1U5n644", "rkggmq2a4V"], "comment_url": ["https://openreview.net/forum?id=S1xg4W-leV¬eId=S1e1U5n644", "https://openreview.net/forum?id=S1xg4W-leV¬eId=rkggmq2a4V", "https://openreview.net/forum?id=S1xg4W-leV¬eId=HkgQIVhaNN", "https://openreview.net/forum?id=S1xg4W-leV¬eId=BkenkV3T4V", "https://openreview.net/forum?id=S1xg4W-leV¬eId=ByeBcCoAVN", "https://openreview.net/forum?id=S1xg4W-leV¬eId=SylYnRo04E"], "meta_review_cdate": 1551356591049, "meta_review_tcdate": 1551356591049, "meta_review_tmdate": 1551881977154, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "The discussion period and authors' efforts to clarify questions have led to a full acceptance agreement among the three reviewers. The paper contribution builds on prior work but proposes new technical elements have clear value. The experimental validation of the unsupervised method on the common BRATS public database will set a precedent in the community.", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=S1xg4W-leV¬eId=BklP3M8B84"], "decision": "Accept"}