AMSR / conferences_raw /midl19 /MIDL.io_2019_Conference_B1gT9_B81E.json
{"forum": "B1gT9_B81E", "submission_url": "https://openreview.net/forum?id=B1gT9_B81E", "submission_content": {"code of conduct": "I have read and accept the code of conduct.", "paperhash": "gadermayr|unsupervisedly_training_gans_for_segmenting_digital_pathology_with_automatically_generated_annotations", "title": "Unsupervisedly Training GANs for Segmenting Digital Pathology with Automatically Generated Annotations", "abstract": "Recently, generative adversarial networks exhibited excellent performances in semi-supervised image analysis scenarios. In this paper, we go even further by proposing a fully unsupervised approach for segmentation applications with prior knowledge of the objects\u2019 shapes. We propose and investigate different strategies to generate simulated label data and perform image-to-image translation between the image and the label domain using an adversarial model. For experimental\nevaluation, we consider the segmentation of the glomeruli, an application scenario from renal pathology. Experiments provide proof of concept and also confirm that the strategy for creating the simulated label data is of particular relevance considering the stability of GAN trainings.", "remove if rejected": "(optional) Remove submission if paper is rejected.", "authorids": ["michael.gadermayr@fh-salzburg.ac.at", "laxmi.gupta@lfb.rwth-aachen.de", "klinkhammer@ukaachen.de", "boor@ukaachen.de", "dorit.merhof@lfb.rwth-aachen.de"], "authors": ["Michael Gadermayr", "Laxmi Gupta", "Barbara M. Klinkhammer", "Peter Boor", "Dorit Merhof"], "keywords": ["Adversarial Networks", "Histology", "Kidney", "Segmentation", "Unsupervised"], "pdf": "/pdf/2d193bc835392642120bfdc9e93766ebb47ef3b3.pdf", "_bibtex": "@inproceedings{gadermayr:MIDLFull2019a,\ntitle={Unsupervisedly Training {\\{}GAN{\\}}s for Segmenting Digital Pathology with Automatically Generated Annotations},\nauthor={Gadermayr, Michael and Gupta, Laxmi and Klinkhammer, Barbara M. 
and Boor, Peter and Merhof, Dorit},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=B1gT9_B81E},\nabstract={Recently, generative adversarial networks exhibited excellent performances in semi-supervised image analysis scenarios. In this paper, we go even further by proposing a fully unsupervised approach for segmentation applications with prior knowledge of the objects{\\textquoteright} shapes. We propose and investigate different strategies to generate simulated label data and perform image-to-image translation between the image and the label domain using an adversarial model. For experimental\nevaluation, we consider the segmentation of the glomeruli, an application scenario from renal pathology. Experiments provide proof of concept and also confirm that the strategy for creating the simulated label data is of particular relevance considering the stability of GAN trainings.},\n}"}, "submission_cdate": 1544079509318, "submission_tcdate": 1544079509318, "submission_tmdate": 1561396921215, "submission_ddate": null, "review_id": ["S1gIPIF2XE", "B1l5vNF2QV", "BkeiJXrnmV"], "review_url": ["https://openreview.net/forum?id=B1gT9_B81E&noteId=S1gIPIF2XE", "https://openreview.net/forum?id=B1gT9_B81E&noteId=B1l5vNF2QV", "https://openreview.net/forum?id=B1gT9_B81E&noteId=BkeiJXrnmV"], "review_cdate": [1548682846119, 1548682338111, 1548665571136], "review_tcdate": [1548682846119, 1548682338111, 1548665571136], "review_tmdate": [1548856755833, 1548856755617, 1548856751418], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper2/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper2/AnonReviewer1"], ["MIDL.io/2019/Conference/Paper2/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1gT9_B81E", 
"B1gT9_B81E", "B1gT9_B81E"], "review_content": [{"pros": "The manuscript is mostly well written.\n\nIt is original in the sense of showing how far one can get with an established cycle-GAN approach and model-based training data to segment simple shaped objects (glomeruli) which differ in number of objects, size and shape from other repeated simple objects (nuclei, tubuli). \n", "cons": "The claim that the method is very robust to the object parameters and that \"the shapes of the simulated objects does not have a major impact on the final segmentation performance\" is not supported by sufficient evidence. The authors should include the statistics of the ground truth annotations and the predicted segmentations when assuming circles and when assuming ellipsoids. Only then readers might be convinced that these claims hold. Doubts stem from the observation that cycle-GAN will synthesize any differences in the distributions to please the discriminator.\n\nThe F1 scores of the shown examples in Fig.4 should be stated, such that readers can judge if these are representative examples or not. Results from ME and MC for the same image and for different stains should be shown to be able to appreciate their performance differences. Contour overlays of the ground truth and predicted glomeruli segmentation on the image will save space and enable readers to better judge segmentation accuracy.\n\nThe mean F1 scores for the supervised method should be included in the text. For claiming \"better performance\" a statistical significance test should be performed.\n\nGadermayr 2017 was used as baseline supervised method. Gadermayr 2017) achieved very good results (F1 0.91 for CN2 method) when trained only on PAS stain on 18 WSIs. How was this method trained for the different stains for this dataset (3 stains, 6 images each) when using 1, 2, 4, 8 WSIs (Fig. 3)? 
It seems Gadermayr 2018a should be used as baseline supervised method, as it can cope with different stains (Dice 0.81-0.86)?\n\nThe claim that the method can easily be adapted to other applications by changing the model should be toned down. Most annotation problems would require quite complex size and shape models and might also need an appearance model if similarly sized and shaped objects are present.\n\nMinor:\n\nPlease clarify what is changing in the repeated experiments, e.g. new simulated annotations or a different random selection of the same dataset?\n\n...where nuclei cannot be clearly detected (Fig. 4, third column)... Should this refer to the second column?\n", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"pros": "The submission proposes to combine a simple generative model of segmentation masks with GAN-based image-to-image translation in order to learn how to segment glomeruli in digital pathology slides. Using information about the distribution of shape, size and number of objects in a segmentation mask, the method can be trained without supervision to achieve performance comparable to a fully supervised method.\n\nAlthough prior knowledge is needed about the distribution of objects, rough estimates based on visual inspection seem to be enough. \n\nOverall, the presentation is clear and the experiments nicely illustrate the potential, as well as some limits, of the suggested approach. If the approach generalizes well, this could become a valuable tool in many applications due to the ease of generating synthetic segmentation masks.", "cons": "The method is validated on a relatively small dataset (9 images in total?). It is not clear from this scale that performance estimates are reliable. This should be considered in future work. 
I would also like to see if there is a difference between dyes.\n\nA problem for GANs is differences in label distribution. If the generated segmentation masks contain significantly fewer/more glomeruli than the images, it is likely that performance will degrade. The experiment with circle/ellipse shape indicates that capturing the exact shape is not very important. I would like to have seen an experiment investigating the importance of estimating the other parameters (shape, number).\nIt is not clear if the visual assessment of parameters is only based on training images or if test images are also included. \nI am missing some information about the workload of the visual assessment. E.g., is it much less than providing rough segmentations by clicking the center of glomeruli?\n\nThe plots on the left in Figure 2 are difficult to view. I suggest you try a different color, linestyle, thickness, ...\n\n", "rating": "4: strong accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "In this paper, the authors use the well-known cycle GAN model to segment WSI (whole slide images) in an unsupervised fashion. They design several annotation models that mimic the real label images. The framework is evaluated on WSI from renal pathology and against a fully supervised scenario. This paper shows a valuable application of cycle GAN to histological image segmentation.\n\nThe authors have published a paper at MICCAI'18 on the same subject; what is the added value in this submission? To better assess the contribution, it would also be interesting to specify if the cycle GAN has been used in a segmentation setting in other medical imaging cases.\n\nSome questions:\n- is there any preprocessing on the image?\n- \"For each stain individually...\": it is not clear what the authors meant in this sentence.\n- rotation parameter alpha is drawn in [0, 2pi]. 
Given the symmetry of the elliptic shape, an interval of [0, pi] should be sufficient, or am I missing something?\n- does the color difference between first row of images a,b and c,d account for anything?\n\ntypos: with the a, where used\nFigure 4: caption \"ettings\" and also 2nd row of images: twice (c)", "cons": "The annotation models contain many parameters, which are tuned \"visually\". Influence of these parameters could be assessed. How interesting would it be to implement an iterative process that would alternate between segmentation and parameter update?\n\nI could not find the exact number of patches on which the framework is assessed. In order to fairly compare the proposed approach to fully supervised FCN, is the test set the same in both cases?", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": ["ByxLxRfkEN", "BkezoTzkNV", "BygKRpGkE4"], "comment_cdate": [1548852718227, 1548852633833, 1548852688601], "comment_tcdate": [1548852718227, 1548852633833, 1548852688601], "comment_tmdate": [1555946050950, 1555946050737, 1555946050518], "comment_readers": [["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper2/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper2/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper2/Authors", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Reply-to-reviewer", "comment": "We would like to thank the reviewer for the positive evaluation! In the following, we address the major concerns (C1-C3):\n\nC1: The annotation models contain many parameters, which are tuned \"visually\". Influence of these parameters could be assessed. 
How interesting would it be to implement an iterative process that would alternate between segmentation and parameter update?\n\nIn future work, we plan to assess the impact of these parameters. Also, an iterative approach sounds really interesting! Thanks for this hint! We will think about such a self-learning pipeline.\n\nC2: I could not find the exact number of patches on which the framework is assessed. In order to fairly compare the proposed approach to fully supervised FCN, is the test set the same in both cases?\n\nWe only mentioned 100 patches per WSI. This number will be added accordingly.\n\nC3: Questions:\n 1) is there any preprocessing on the image?\n 2) \"For each stain individually...\": it is not clear what the authors meant in this sentence.\n 3) rotation parameter alpha is drawn in [0, 2pi]. Given the symmetry of the elliptic shape, an interval of [0, pi] should be sufficient, or am I missing something?\n 4) does the color difference between first row of images a,b and c,d account for anything?\n\n1) There is no preprocessing.\n2) \"For each stain individually\": this is a result of confusion, as we were simultaneously performing experiments (in a different setting) with different stains. The sentence will be changed, as there is only one stain in this experiment.\n3) This is also true - however, both definitions are correct.\n4) This difference is due to the encoding. In the two-class case, we use gray-scale images to represent the labels, while we used color images in the three-class case. This is only due to a change in representation and does not affect segmentation.\n"}, {"title": "Reply-to-reviewer", "comment": "We would like to thank the reviewer for the positive evaluation! In the following, we address the major concerns (C1-C5):\n\nC1: The authors should include the statistics of the ground truth annotations and the predicted segmentations when assuming circles and when assuming ellipsoids. 
\n\nWe already thought about such an evaluation. The problem is that the resulting metrics are hard to assess because they cannot be compared with some baseline. Anyway, we will provide metrics to compare at least the difference between fitting an ellipsoid vs. fitting a circle. \n\nC2: The F1 scores of the shown examples in Fig.4 should be stated, such that readers can judge if these are representative examples or not. Results from ME and MC for the same image and for different stains should be shown to be able to appreciate their performance differences. Contour overlays of the ground truth and predicted glomeruli segmentation on the image will save space and enable readers to better judge segmentation accuracy.\n\nThis will be changed accordingly in the revised version of the manuscript.\n\nC3: The mean F1 scores for the supervised method should be included in the text. For claiming \"better performance\" a statistical significance test should be performed.\n\nThis will also be adjusted accordingly.\n\nC4: Gadermayr 2017 was used as baseline supervised method. Gadermayr (2017) achieved very good results (F1 0.91 for CN2 method) when trained only on PAS stain on 18 WSIs. How was this method trained for the different stains for this dataset (3 stains, 6 images each) when using 1, 2, 4, 8 WSIs (Fig. 3)? It seems Gadermayr 2018a should be used as baseline supervised method, as it can cope with different stains (Dice 0.81-0.86)?\n\nThanks for this hint! We actually used the wrong citation here. With the correct citation, this will become clear.\n\nC5: The claim that the method can easily be adapted to other applications by changing the model should be toned down. Most annotation problems would require quite complex size and shape models and might also need an appearance model if similarly sized and shaped objects are present.\n\nWe agree with the reviewer that the simulation stage can be quite difficult. 
If focusing on complex shapes, such a simulation would be highly complex! Certainly for many tasks such an approach is either infeasible or corresponds to complex modelling. Anyway, there are many tasks in (bio)medical image analysis where the objects-of-interest are represented by rather basic e.g. circular shapes. We will modify the sentence to make that more clear.\n"}, {"title": "Reply-to-reviewer", "comment": "We would like to thank the reviewer for the positive evaluation! In the following, we address the major concerns (C1-C4):\n\nC1: The method is validated on a relatively small dataset (9 images in total?). It is not clear from this scale that performance estimates are reliable. This should be considered in future work. I would also like to see if there is a difference between dyes.\n\nEvaluation was performed with a relatively small number of whole slide images. However, from each WSI, 100 patches (500x500 pixels) were extracted resulting in a quite large data set. We agree that further stains should be investigated as well. This is done in future work.\n\nC2: A problem for GANs is differences in label distribution. If the generated segmentation masks contains significantly less/more glomeruli than the images, it is likely that performance will degrade. The experiment with circle/elipse shape indicates that capturing the exact shape is not very important. I would like to have seen an experiment investigating the importance of estimating the other parameters (shape, number).\n\nThis point was also critized by reviewer 1. We will provide a small evaluation of shapes in the revised version (see reviewer 1 comments).\n\nC3: It is not clear if the visual assessment of parameters is only based on training images or if test images are also included. I am missing some information about the workload of the visual assessment. 
E.g., is it much less than providing rough segmentations by clicking the center of glomeruli?\n\nVisual assessment is only required for a few (e.g. 5) patches, whereas rough annotation would be required for a large data set (e.g. 2000 patches). So we are confident that visual inspection is less time-consuming.\n\nC4: The plots on the left in Figure 2 are difficult to view. I suggest you try a different color, linestyle, thickness, ...\n\nThat will be changed accordingly."}], "comment_replyto": ["BkeiJXrnmV", "S1gIPIF2XE", "B1l5vNF2QV"], "comment_url": ["https://openreview.net/forum?id=B1gT9_B81E&noteId=ByxLxRfkEN", "https://openreview.net/forum?id=B1gT9_B81E&noteId=BkezoTzkNV", "https://openreview.net/forum?id=B1gT9_B81E&noteId=BygKRpGkE4"], "meta_review_cdate": 1551356586623, "meta_review_tcdate": 1551356586623, "meta_review_tmdate": 1551881980092, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "It looks like all reviewers suggest accepting this paper. After reviewing this paper and the replies to the critiques, I agree with the reviewers to accept this paper. Generally, this paper is well written. I recommend the authors incorporate the reviewers' comments (e.g., the statistical test) in the final version. ", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1gT9_B81E&noteId=Hye7hMIr8E"], "decision": "Accept"}