{"forum": "H1xkWv8gx4", "submission_url": "https://openreview.net/forum?id=H1xkWv8gx4", "submission_content": {"title": "Weakly Supervised Deep Nuclei Segmentation using Points Annotation in Histopathology Images", "authors": ["Hui Qu", "Pengxiang Wu", "Qiaoying Huang", "Jingru Yi", "Gregory M. Riedlinger", "Subhajyoti De", "Dimitris N. Metaxas"], "authorids": ["hui.qu@cs.rutgers.edu", "pw241@cs.rutgers.edu", "qh55@cs.rutgers.edu", "jy486@cs.rutgers.edu", "gr338@cinj.rutgers.edu", "sd948@cinj.rutgers.edu", "dnm@cs.rutgers.edu"], "keywords": ["Nuclei segmentation", "weak supervision", "deep learning", "Voronoi diagram", "conditional random field"], "abstract": "Nuclei segmentation is a fundamental task in histopathological image analysis. Typically, such segmentation tasks require significant effort to manually generate pixel-wise annotations for fully supervised training. To alleviate the manual effort, in this paper we propose a novel approach using points-only annotation. Two types of coarse labels with complementary information are derived from the points annotation, and are then utilized to train a deep neural network. The fully-connected conditional random field loss is utilized to further refine the model without introducing extra computational complexity during inference. Experimental results on two nuclei segmentation datasets reveal that the proposed method is able to achieve competitive performance compared to the fully supervised counterpart and the state-of-the-art methods while requiring significantly less annotation effort. Our code is publicly available.", "pdf": "/pdf/f27e9ec3ae8c182b7501331af8d6889e629d24ec.pdf", "code of conduct": "I have read and accept the code of conduct.", "remove if rejected": "(optional) Remove submission if paper is rejected.", "paperhash": "qu|weakly_supervised_deep_nuclei_segmentation_using_points_annotation_in_histopathology_images", "_bibtex": "@inproceedings{qu:MIDLFull2019a,\ntitle={Weakly Supervised Deep Nuclei Segmentation using Points Annotation in Histopathology Images},\nauthor={Qu, Hui and Wu, Pengxiang and Huang, Qiaoying and Yi, Jingru and Riedlinger, Gregory M. and De, Subhajyoti and Metaxas, Dimitris N.},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=H1xkWv8gx4},\nabstract={Nuclei segmentation is a fundamental task in histopathological image analysis. Typically, such segmentation tasks require significant effort to manually generate pixel-wise annotations for fully supervised training. To alleviate the manual effort, in this paper we propose a novel approach using points-only annotation. Two types of coarse labels with complementary information are derived from the points annotation, and are then utilized to train a deep neural network. The fully-connected conditional random field loss is utilized to further refine the model without introducing extra computational complexity during inference. Experimental results on two nuclei segmentation datasets reveal that the proposed method is able to achieve competitive performance compared to the fully supervised counterpart and the state-of-the-art methods while requiring significantly less annotation effort. 
Our code is publicly available.},\n}"}, "submission_cdate": 1544738551210, "submission_tcdate": 1544738551210, "submission_tmdate": 1561398694980, "submission_ddate": null, "review_id": ["rygFXzWL7V", "SklS9e_kXE", "S1lRhyyhGE"], "review_url": ["https://openreview.net/forum?id=H1xkWv8gx4&noteId=rygFXzWL7V", "https://openreview.net/forum?id=H1xkWv8gx4&noteId=SklS9e_kXE", "https://openreview.net/forum?id=H1xkWv8gx4&noteId=S1lRhyyhGE"], "review_cdate": [1548255777173, 1547825292839, 1547591606077], "review_tcdate": [1548255777173, 1547825292839, 1547591606077], "review_tmdate": [1548856721545, 1548856711077, 1548856707461], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper108/AnonReviewer1"], ["MIDL.io/2019/Conference/Paper108/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper108/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["H1xkWv8gx4", "H1xkWv8gx4", "H1xkWv8gx4"], "review_content": [{"pros": "This paper proposes to train a nuclei segmentation network using pixel-level labels generated from point annotations by Voronoi diagrams and k-means clustering. A dense CRF is trained on top of the network in an end-to-end manner to refine the segmentation model. The authors evaluate their methods on two datasets, the Lung Cancer dataset (40 images/8 cases) and the MultiOrgan dataset (30 images/7 organs). This paper is well-organized and easy to follow. The topic of learning from weakly annotated histology images is highly relevant for the community.", "cons": "- The proposed method uses (dilated) Voronoi edges as the 'background' label, which effectively depends on two assumptions: 1) the neighbouring nuclei have similar size and are non-touching, 2) the point annotation is located at the centre of each nucleus. I feel these are also limitations of the proposed method. Simply saying 'point annotation' is therefore inaccurate and a bit misleading here. In fact, the proposed method only allows one point per nucleus, and in the experiments the manual point annotations are simulated by computing the central point from the ground truth full masks.\n\n- The 'Full' + CRF method should be included in the analysis, to show the possible 'upper bound' of the segmentation performance.\n\n- The proposed training label extraction methods should be compared with those evaluated by Kost et al. (2017), Training nuclei detection algorithms with simple annotations.", "rating": "2: reject", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"pros": "The paper presents a method to perform nuclei segmentation based on point annotations. The authors evaluate weakly supervised methods (with and without CRF loss) on two nuclei segmentation datasets and compare the performance with fully supervised and other state-of-the-art methods. The paper is well organized and clear, has a well-defined objective and shows experimental evaluation using segmentation metrics, e.g. F1 score, Dice coefficient, and AJI. The outcomes of the analysis are promising, as the segmentation achieved using weakly supervised learning is comparable to fully supervised counterparts and other investigated methods.", "cons": "The following concerns need to be addressed by the authors:\n-In initial stages, Voronoi diagrams extract the rough positions of cells and k-means clustering extracts the rough boundaries. 
From the k-means results, it seems that they do not provide strong priors for accurately segmenting the nuclei boundaries. Both Voronoi centers and clusters appear to be weak shape descriptors, as boundary information is not fully preserved. The authors may want to explain the limitations of this type of annotation and discuss why the observed AJI values are lower compared to the other evaluation metrics. Why is the highest AJI achieved for the Weak/Voronoi method?\n-How is the 'ignored class' represented in the training set (0/1)? It is shown in Fig 2 but not in the results in Fig 3 and Fig 4. \n-The caption of Fig 1 could be made clearer to indicate the figure contents.\n-In Table 1, the best results could be highlighted for better readability.\n\n", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "This paper attempts to perform nuclei segmentation in a weakly supervised fashion, using point annotations. \n\nThe paper is very well written and easy to follow; Figure 1 does an excellent job of summarizing the method. The idea is to generate two label maps from the points: a Voronoi partitioning for the first one, and a clustering between foreground, background and neutral classes for the second. Those maps are used for training with a partial cross-entropy. The trained network is then fine-tuned with a direct CRF loss, as in Tang et al. \nEvaluation is performed on two datasets in several configurations (with and without CRF loss, and variations on the labels used), showing the effects of the different parts of the method. The best combination (both labels + CRF) is close to or on par with full supervision. \nThe authors also compare the annotation time between points, bounding boxes and full supervision, which really highlights the impact of their method (x10 speedup).\n\nA few questions:\n- Since the method is quite simple and elegant, I expect it could be adapted to other tasks. Do you have any ideas in mind?\n- How resilient is the method to \"forgotten\" nuclei, i.e. nuclei without a point in the labels? Could it be extended to work with only a fraction of the nuclei annotated? \n- Is using a pre-trained network really helping? Since there is so much dissimilarity between ImageNet and the target domains, I expect it to be mostly a glorified edge detector. Is it improving the final performance, speeding up convergence, or both?", "cons": "Minor improvements for the camera-ready version, in no particular order:\n\nTang et al. 2018 was actually published at ECCV 2018; the bibliographic entry should be updated. \n\nSection 2.3 should make the differences (if any) with Tang et al. explicit.\n\nThese three papers should be included in the state-of-the-art section:\n- Constrained convolutional neural networks for weakly supervised segmentation, Pathak et al., ICCV 2015 \n- DeepCut: Object Segmentation from Bounding Box Annotations using Convolutional Neural Networks, Rajchl et al., TMI, 2016\n- Constrained-CNN losses for weakly supervised segmentation, Kervadec et al., MIDL 2018\n\nSince the AJI and object-level Dice are not standard and were introduced in other papers, it would be easier to include their formulations in the paper, so the reader does not have to go looking for them.\n\nReplacing (a), (b), ... by Image, ground truth, ... 
in figures 2, 3, and 4 would improve readability.\n\n\n\n", "rating": "4: strong accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature", "oral_presentation": ["Consider for oral presentation"]}], "comment_id": ["BylpshRWNV", "S1gJMy0XEV", "H1eUAeCQEN", "Skea8XAQVV"], "comment_cdate": [1549032613414, 1549160198641, 1549160653922, 1549161301268], "comment_tcdate": [1549032613414, 1549160198641, 1549160653922, 1549161301268], "comment_tmdate": [1555946044173, 1555946041866, 1555946041609, 1555946041391], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper108/Area_Chair1", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper108/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper108/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper108/Authors", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "discussion", "comment": "Dear Paper108 authors, \n\nPlease submit your rebuttal to the reviewers' comments. It will greatly help our decision if sufficient discussion between the authors and reviewers takes place within this period. For inconsistent reviews such as the ones this paper received, the discussion is all the more important. Thank you. "}, {"title": "Response to Reviewer1", "comment": "Thank you for the effort in reviewing our paper. We clarify the issues as follows:\n\nQ1: The use of dilated Voronoi edges as background pixels depends on two assumptions:\na) the neighboring nuclei have similar size and are non-touching.\nb) the point annotation is located at the center of each nucleus.\n\nA1: Both the Voronoi labels and cluster labels are proxy labels generated from the point labels, aiming at obtaining as much information about the nuclei and background as possible. They are not ground truth and we didn\u2019t claim that the Voronoi edges are the exact background pixels.\n\nRegarding assumption a), there may be errors when a small nucleus and a large nucleus are close to each other, or when two nuclei are touching. However, the error rate is low and treating Voronoi edges as background pixels helps us train the neural network. To make this clear, we computed the error rates (the number of nuclei pixels that lie on the Voronoi edges / the number of all Voronoi edge pixels) on the two datasets used in the paper, i.e., 2.45% on the Lung Cancer dataset and 7.02% on the MultiOrgan dataset. Besides, we explain in Section 2.1.2 that although the pixels on the Voronoi edge between two touching nuclei may not necessarily be background, the edges are still helpful in guiding the network to separate the nuclei. \n\nRegarding assumption b), we indeed conducted experiments with a similar assumption (the point is located at the center of each nucleus\u2019s bounding box), because we obtained the point annotations from the bounding boxes of the full masks. However, we expect that our method will work if the points are close to the centers. To validate this, we conducted additional experiments in which we randomly sample the point for each nucleus in its bounding box using a truncated 2D Gaussian distribution (zero probabilities outside the bounding box). 
The Gaussian distribution is centered at the bounding box\u2019s center and has a standard deviation of 3 pixels, which is not hard to achieve for a pathologist during the annotation (the average areas of nuclei in the two datasets are 185 and 397 pixels, respectively, corresponding to radii of 7.7 and 11.2 pixels if treating nuclei as circles). The results are as follows:\nDataset | Method | Acc | F1 | Dice | AJI\nLC | Center points | 0.9433 | 0.8120 | 0.8002 | 0.6503\nLC | Random points | 0.9408 | 0.8069 | 0.7920 | 0.6350\nMO | Center points | 0.9071 | 0.7776 | 0.7270 | 0.5097\nMO | Random points | 0.9081 | 0.7605 | 0.7093 | 0.4949\nThe performance using random points is very close to that of center points.\n\nQ2: The method only allows one point per nucleus and the points are computed from full masks.\nA2: One point is enough to mark the location of each nucleus, which makes it efficient for a pathologist to generate the annotations. We generated points from full masks for convenience. As presented in the answer to Q1, our method can work well even if the points are not at the centers.\n\nQ3: Add experiments about \u2018Full\u2019 + CRF.\nA3: In this task, the main bottleneck that affects the performance of the fully-supervised method is false positives, not boundary accuracy. Therefore, the CRF is not as effective as in the weakly-supervised setting. Here are the additional experimental results on \u2018Full\u2019 + CRF:\nDataset | Method | Acc | F1 | Dice | AJI\nLC | Full | 0.9615 | 0.8771 | 0.8521 | 0.6979\nLC | Full + CRF | 0.9626 | 0.8784 | 0.8526 | 0.7029\nMO | Full | 0.9194 | 0.8100 | 0.6763 | 0.3919\nMO | Full + CRF | 0.9202 | 0.8112 | 0.6841 | 0.4029\n\nQ4: Did not compare the proposed training label extraction methods with those evaluated in Kost et al., 2017.\nA4: Although we use similar Voronoi labels as in Kost et al. (2017), we didn\u2019t compare our method with those in Kost et al. (2017) because the settings are different. \na) The task in Kost et al. (2017) is nuclei detection and only detection performance was reported, whereas our task is nuclei segmentation using weak labels, which is harder and uses another type of labels.\nb) In Kost et al. (2017), the Voronoi-based training sample extraction aims to find the locations of nuclei and non-nuclei pixels and is used for training sample selection before nuclei detection. In our method, however, the Voronoi label not only contains nuclei and non-nuclei pixels, but also has an ignored class. It is used as a better label than the point label to guide the training of deep neural networks for segmentation.\nIn the camera-ready version, we will add Kost et al. (2017) to the related work and describe the differences between their methods and ours.\n\n\n"}, {"title": "Response to Reviewer3", "comment": "Thank you for the effort in reviewing our paper. We clarify the issues as follows:\n\nQ1: The Voronoi centers and clusters are weak shape descriptors as boundary information is not fully preserved.\nA1: Yes, the two types of labels lack boundary information, which is a key factor that affects the performance of weakly supervised methods. The neural network can find correct boundaries for most nuclei, but for nuclei with non-uniform colors it is hard to predict correct boundaries. Therefore, we take advantage of the CRF to refine the results. Ideally, if we could obtain accurate boundary information from weak annotations, the performance should be very close to that of fully-supervised methods. \n\nQ2: Why are the AJI values lower compared to the other evaluation metrics?\nA2: AJI was proposed by Kumar et al. 
(TMI 2017) to measure the segmentation performance for nuclei. It is lower because it adds the false positive and false negative objects to the denominator. A detailed analysis can be found in Kumar et al.\u2019s paper:\nNeeraj Kumar, Ruchika Verma, Sanuj Sharma, Surabhi Bhargava, Abhishek Vahadane, and Amit Sethi. A dataset and a technique for generalized nuclear segmentation for computational pathology. IEEE Transactions on Medical Imaging, 36(7):1550\u20131560, 2017.\n\nQ3: Why is the highest AJI achieved for the Weak/Voronoi method?\nA3: You might have misread the results in Table 1. The highest AJI for the Lung Cancer dataset is achieved by \u2018Full\u2019, and for the MultiOrgan dataset it is achieved by the DIST method (Naylor et al., 2017).\n\nQ4: How is the \u2018ignored class\u2019 represented in the training set (0/1)? It is shown in Fig 2 but not in Fig.3 and Fig.4.\nA4: The \u2018ignored class\u2019 is represented as 2 in the training set (0 for background and 1 for nuclei). During training, the model outputs 0 or 1 for each pixel. Pixels with label 2 are ignored when computing the loss. The results in Figures 3 and 4 are the predictions of the trained model, which don\u2019t contain the ignored class.\n\nQ5: The caption of Fig.1 could be clearer.\nA5: We didn\u2019t explain it clearly in the caption due to the page limit. More details will be added in the camera-ready version.\n\nQ6: The best results could be highlighted in Table 1 for better readability.\nA6: Thanks for the suggestion. We will highlight the best results in the camera-ready version.\n\n"}, {"title": "Response to Reviewer2", "comment": "Thank you for the effort in reviewing our paper. We clarify the issues as follows:\n\nQ1: Any ideas to adapt the method to other tasks?\nA1: It could be adapted to other segmentation tasks, as long as the shapes of the objects are not far from convex. Otherwise, the Voronoi labels will contain many errors, which is harmful for training.\n\nQ2: How resilient is the method to \"forgotten\" nuclei? Could it be extended to work with only a fraction of the nuclei annotated?\nA2: We may later try using a fraction of the annotations and see whether it works well. We expect it to work if the annotated nuclei are located in a certain area, because we can compute the Voronoi label only on that area and ignore all other areas. Otherwise, if we randomly select a fraction of the nuclei in an image to annotate, the Voronoi labels will not work, because the Voronoi edges computed from this subset of nuclei are not accurate.\n\nQ3: Is the pretrained network really helping? Improving the final performance or speeding up convergence?\nA3: Yes. Pre-trained weights help improve the final performance according to our observations in the experiments. Neither dataset is large enough to train the networks well from scratch. However, pre-training may not be necessary if large datasets are available, according to the analysis in He et al.\u2019s recent work:\nHe, Kaiming, Ross Girshick, and Piotr Doll\u00e1r. \"Rethinking ImageNet Pre-training.\" arXiv preprint arXiv:1811.08883 (2018).\n\nQ4: Describe the differences (if any) from Tang et al. 2018 in Section 2.3.\nA4: There is no difference. We are the first to use it in weakly supervised nuclei segmentation.\n\nQ5: The formulations of AJI and object-level Dice should be put in the paper.\nA5: We didn\u2019t put the formulations in the manuscript due to the page limit. If more pages are permitted, we will include them in the camera-ready version.\n\nQ6: Tang et al. 
2018 was published at ECCV 2018 and the reference should be updated. Three papers (Pathak et al. ICCV 2015, Rajchl et al. TMI 2016, Kervadec et al. MIDL 2018) should be included in the state-of-the-art section. Change the subtitles in figures 2, 3, and 4 to improve readability.\nA6: Thanks for the suggestions. We will correct these in the camera-ready version.\n\n\n\n\n"}], "comment_replyto": ["H1xkWv8gx4", "rygFXzWL7V", "SklS9e_kXE", "S1lRhyyhGE"], "comment_url": ["https://openreview.net/forum?id=H1xkWv8gx4&noteId=BylpshRWNV", "https://openreview.net/forum?id=H1xkWv8gx4&noteId=S1gJMy0XEV", "https://openreview.net/forum?id=H1xkWv8gx4&noteId=H1eUAeCQEN", "https://openreview.net/forum?id=H1xkWv8gx4&noteId=Skea8XAQVV"], "meta_review_cdate": 1551356594245, "meta_review_tcdate": 1551356594245, "meta_review_tmdate": 1551881981825, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "This paper presents a straightforward and novel solution for nuclei segmentation by incorporating point annotations into a CNN. The method is clearly described and the experimental results look reasonable. The authors have thoroughly replied to the reviewers' questions and also provided additional experimental results, thus addressing the critiques. ", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=H1xkWv8gx4&noteId=ryeqnMUBUE"], "decision": "Accept"}
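
The record above repeatedly discusses the proxy labels derived from point annotations: a Voronoi label that supplies dilated Voronoi edges as proxy background plus an ignored class, and a cluster label from k-means. As a minimal illustrative sketch of the Voronoi part (not the authors' released code; the function name, the 2-pixel dilation, and the scipy/scikit-image helpers are our own assumptions):

```python
import numpy as np
from scipy.ndimage import binary_dilation
from scipy.spatial import cKDTree
from skimage.segmentation import find_boundaries

def voronoi_label(points, shape, ignore_value=2):
    """Derive a Voronoi proxy label from point annotations.

    points: (N, 2) array of (row, col) nucleus locations.
    Returns a map with 1 at annotated points (proxy nuclei), 0 on
    dilated Voronoi edges (proxy background), and ignore_value
    elsewhere (excluded from the training loss).
    """
    h, w = shape
    rr, cc = np.mgrid[0:h, 0:w]
    pixels = np.stack([rr.ravel(), cc.ravel()], axis=1)
    # Nearest-seed partition of the pixel grid == Voronoi regions.
    _, nearest = cKDTree(points).query(pixels)
    regions = nearest.reshape(h, w)
    # Pixels where the nearest seed changes form the Voronoi edges.
    edges = binary_dilation(find_boundaries(regions, mode="thick"),
                            iterations=2)  # illustrative 2-px dilation
    label = np.full(shape, ignore_value, dtype=np.uint8)
    label[edges] = 0  # proxy background
    for r, c in points.astype(int):
        label[r, c] = 1  # nuclei seeds (often dilated as well in practice)
    return label
```

The error rate quoted in the reply to Reviewer 1 is then simply `gt_nuclei[edges].mean()` for a boolean ground-truth nuclei mask: the fraction of Voronoi-edge pixels that actually fall inside nuclei.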
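The complementary cluster label is described only loosely in the thread (k-means extracting rough boundaries). The sketch below is a hypothetical simplification: it clusters pixel colors into three groups and maps the darkest to nuclei, whereas the paper also uses the distance to the annotated points; the heuristic mapping is entirely our assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_label(image, ignore_value=2):
    """Rough k-means cluster label from pixel colors (illustrative only).

    Clusters pixels into 3 groups; the darkest cluster is mapped to
    nuclei (1), the brightest to background (0), and the middle one
    to the ignored class. This color-only heuristic is our assumption.
    """
    h, w, _ = image.shape
    km = KMeans(n_clusters=3, n_init=10).fit(image.reshape(-1, 3).astype(float))
    order = np.argsort(km.cluster_centers_.mean(axis=1))  # dark -> bright
    mapping = {order[0]: 1, order[1]: ignore_value, order[2]: 0}
    return np.vectorize(mapping.get)(km.labels_).reshape(h, w)
```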
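The reply to Reviewer 3 states that label 2 (the ignored class) is skipped when computing the loss. In PyTorch, this kind of partial cross-entropy is commonly expressed with the `ignore_index` argument; a minimal sketch with illustrative shapes:

```python
import torch
import torch.nn as nn

# Labels: 0 = background, 1 = nuclei, 2 = ignored class (per the reply above).
criterion = nn.CrossEntropyLoss(ignore_index=2)

logits = torch.randn(4, 2, 256, 256)         # (batch, 2 classes, H, W) network output
labels = torch.randint(0, 3, (4, 256, 256))  # proxy labels including the ignored class
loss = criterion(logits, labels)             # pixels labeled 2 contribute nothing
```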
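The random-points experiment in the reply to Reviewer 1 samples one point per nucleus from a 2D Gaussian centered on the bounding-box center (standard deviation 3 pixels, zero probability outside the box). Rejection sampling is one straightforward way to realize such a truncated Gaussian; the helper below is a hypothetical reconstruction of that setup, not the authors' script:

```python
import numpy as np

def sample_point(box, sigma=3.0, rng=None):
    """Sample a simulated point annotation inside a nucleus bounding box.

    box: (r0, c0, r1, c1) with half-open pixel bounds.
    Draws from a 2D Gaussian centered on the box center and rejects
    samples outside the box, matching the stated truncation.
    """
    rng = np.random.default_rng() if rng is None else rng
    r0, c0, r1, c1 = box
    center = np.array([(r0 + r1) / 2.0, (c0 + c1) / 2.0])
    while True:
        r, c = rng.normal(center, sigma)
        if r0 <= r < r1 and c0 <= c < c1:
            return int(r), int(c)
```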
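Section 2.3 of the paper fine-tunes with the direct CRF loss of Tang et al. (2018), which the reply to Reviewer 2 confirms is used unchanged. That loss relaxes the dense pairwise Potts energy under Gaussian affinities; Tang et al. evaluate it with fast Gaussian filtering, but a naive O(N^2) sketch of the binary case (our own simplification, with placeholder kernel bandwidths, feasible only on small crops) conveys the idea:

```python
import torch

def dense_crf_loss(fg_prob, image, sigma_xy=6.0, sigma_rgb=0.1):
    """Relaxed dense-CRF (Potts) energy for binary segmentation.

    fg_prob: (H, W) foreground probabilities from the network.
    image:   (H, W, 3) RGB image scaled to [0, 1].
    Returns sum_ij w_ij * p_i * (1 - p_j): the affinity-weighted
    amount of label disagreement between similar, nearby pixels.
    """
    h, w = fg_prob.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    pos = torch.stack([ys, xs], dim=-1).reshape(-1, 2).float()
    rgb = image.reshape(-1, 3)
    p = fg_prob.reshape(-1)
    # Dense bilateral affinities; quadratic in pixel count by design here.
    aff = torch.exp(-torch.cdist(pos, pos) ** 2 / (2 * sigma_xy ** 2)
                    - torch.cdist(rgb, rgb) ** 2 / (2 * sigma_rgb ** 2))
    return (p @ aff @ (1.0 - p)) / p.numel()
```

Minimizing this term encourages pixels that are close in position and color to share a label, which is how the CRF loss sharpens boundaries without adding any inference-time cost.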
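Finally, both a review question and the rebuttal defer the AJI formulation to Kumar et al. (TMI 2017). Informally: each ground-truth nucleus is matched to its best-overlapping predicted object, matched intersections accumulate in the numerator and matched unions in the denominator, and the pixels of every unmatched predicted object are added to the denominator as well, which is why AJI sits below pixel-level metrics. A compact sketch over instance label maps (assuming 0 encodes background), based on our reading of that definition:

```python
import numpy as np

def aggregated_jaccard_index(gt, pred):
    """AJI of Kumar et al. (TMI 2017) for instance label maps (0 = background)."""
    pred_ids = [j for j in np.unique(pred) if j != 0]
    used = set()
    inter_sum, union_sum = 0, 0
    for i in (i for i in np.unique(gt) if i != 0):
        g = gt == i
        # Candidate predictions overlapping this ground-truth nucleus.
        cand = [j for j in np.unique(pred[g]) if j != 0]
        best_j, best_iou, best_inter, best_union = None, -1.0, 0, int(g.sum())
        for j in cand:
            p = pred == j
            inter = int(np.logical_and(g, p).sum())
            union = int(np.logical_or(g, p).sum())
            if inter / union > best_iou:
                best_j, best_iou = j, inter / union
                best_inter, best_union = inter, union
        inter_sum += best_inter
        union_sum += best_union
        if best_j is not None:
            used.add(best_j)
    # Unmatched predictions inflate the denominator -- the reason AJI is
    # lower than pixel-level metrics, as noted in the reply above.
    for j in pred_ids:
        if j not in used:
            union_sum += int((pred == j).sum())
    return inter_sum / union_sum if union_sum else 0.0
```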