{"forum": "Byg6tbleeE", "submission_url": "https://openreview.net/forum?id=Byg6tbleeE", "submission_content": {"title": "VOCA: Cell Nuclei Detection In Histopathology Images By Vector Oriented Confidence Accumulation", "authors": ["Chensu Xie", "Chad M. Vanderbilt", "Anne Grabenstetter", "Thomas J. Fuchs"], "authorids": ["xic3001@med.cornell.edu", "vanderbc@mskcc.org", "grabensa@mskcc.org", "fuchst@mskcc.org"], "keywords": [], "abstract": "Cell nuclei detection is the basis for many tasks in Computational Pathology ranging from cancer diagnosis to survival analysis. It is a challenging task due to the significant inter/intra-class variation of cellular morphology. The problem is aggravated by the need for additional accurate localization of the nuclei for downstream applications. Most of the existing methods regress the probability of each pixel being a nuclei centroid, while relying on post-processing to implicitly infer the rough location of nuclei centers. To solve this problem we propose a novel multi-task learning framework called vector oriented confidence accumulation (VOCA) based on deep convolutional encoder-decoder. The model learns a confidence score, localization vector and weight of contribution for each pixel. The three tasks are trained concurrently and the confidence of pixels are accumulated according to the localization vectors in detection stage to generate a sparse map that describes accurate and precise cell locations. 
A detailed comparison to the state-of-the-art based on a publicly available colorectal cancer dataset showed superior detection performance and significantly higher localization accuracy.", "pdf": "/pdf/bca12e1db18a41bee200c1ac4a88f412b34c0404.pdf", "code of conduct": "I have read and accept the code of conduct.", "paperhash": "xie|voca_cell_nuclei_detection_in_histopathology_images_by_vector_oriented_confidence_accumulation", "_bibtex": "@inproceedings{xie:MIDLFull2019a,\ntitle={{\\{}VOCA{\\}}: Cell Nuclei Detection In Histopathology Images By Vector Oriented Confidence Accumulation},\nauthor={Xie, Chensu and Vanderbilt, Chad M. and Grabenstetter, Anne and Fuchs, Thomas J.},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=Byg6tbleeE},\nabstract={Cell nuclei detection is the basis for many tasks in Computational Pathology, ranging from cancer diagnosis to survival analysis. It is a challenging task due to the significant inter/intra-class variation of cellular morphology. The problem is aggravated by the need for additional accurate localization of the nuclei for downstream applications. Most of the existing methods regress the probability of each pixel being a nucleus centroid, while relying on post-processing to implicitly infer the rough location of nuclei centers. To solve this problem, we propose a novel multi-task learning framework called vector oriented confidence accumulation (VOCA) based on a deep convolutional encoder-decoder. The model learns a confidence score, a localization vector, and a weight of contribution for each pixel. The three tasks are trained concurrently, and the confidence scores of pixels are accumulated according to the localization vectors in the detection stage to generate a sparse map that describes accurate and precise cell locations. 
A detailed comparison to the state-of-the-art based on a publicly available colorectal cancer dataset showed superior detection performance and significantly higher localization accuracy.},\n}"}, "submission_cdate": 1544712580551, "submission_tcdate": 1544712580551, "submission_tmdate": 1561399770831, "submission_ddate": null, "review_id": ["BJxl-BooQV", "B1xE1wQjQN", "B1llMpIgQN"], "review_url": ["https://openreview.net/forum?id=Byg6tbleeE&noteId=BJxl-BooQV", "https://openreview.net/forum?id=Byg6tbleeE&noteId=B1xE1wQjQN", "https://openreview.net/forum?id=Byg6tbleeE&noteId=B1llMpIgQN"], "review_cdate": [1548625144211, 1548592860368, 1547885832095], "review_tcdate": [1548625144211, 1548592860368, 1547885832095], "review_tmdate": [1548856745863, 1548856740809, 1548856711778], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper51/AnonReviewer2"], ["MIDL.io/2019/Conference/Paper51/AnonReviewer1"], ["MIDL.io/2019/Conference/Paper51/AnonReviewer3"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["Byg6tbleeE", "Byg6tbleeE", "Byg6tbleeE"], "review_content": [{"pros": "The paper presents a method for cell detection in H&E stained histopathology images based on convolutional networks.\nThe model predicts three maps, which are then combined and post-processed to get the predicted location of cells.\n\nThe paper is well written, and the authors show that their method outperforms state-of-the-art approaches on a public dataset of manually annotated cells in colon cancer, and claim that their method is faster than other methods that address the same task.\n\nIn my opinion, the main contribution over previously presented methods is the introduction of the wt_map, because something equivalent to a combination of conf_map and loc_map was already present in other works, such as Sirinukunwattana et al. (2016). 
Additionally, the formulation of the problem as a multi-task approach is novel in this context, to the best of my knowledge.", "cons": "The proposed method shows improvements over state-of-the-art approaches, but it is only tested on a single dataset, and only on H&E staining. It would be interesting to show, or comment on, whether the same method would work for immunohistochemistry as well, where stain artefacts are present, and detection of cells grouped in clusters is challenging.\n\nAdditionally, only examples of positive results are reported. The reported F1-score is good, but not perfect. This should be discussed: for example, show failure cases, whether there are common causes of failure, how to address them, and whether this relates to an imperfect reference standard. It would also be good to compare with one of the region proposal based methods mentioned in the introduction, such as Faster R-CNN or YOLO, which showed pretty good performance at lymphocyte detection (limited to IHC) at MIDL 2018 (M. van Rijthoven et al., 2018).\n\nRegarding quantitative performance, the authors claim that the performance \"largely improved\": from 0.879 to 0.886, and from 0.882 to 0.887. Is this considered a large improvement in this setting?\n\nThree maps are produced and combined to create an accumulator map, which is post-processed in order to obtain the final detections. Since the three maps are generated for the training set as well, did the authors check whether this gives F1-score = 1.0 on the training set? I guess it does, but it could be that the contribution of Wt underweights some locations and lowers their final score. 
If this is the case, it would be good to assess the performance of the post-processing step on the training set as well (without using the model), which could give a good indication of the upper bound of the performance of this approach.\n\n\nOther comments:\n\n* The caption of Table 1 should be improved; it does not describe what is in the table (the description is in the text though).\n\n* How is the average accuracy computed? Only single pixels manually annotated as foreground and the rest as background? And how are the weights computed, if used, and the scores averaged? This is not clear from the text.\n\n* What type of functions are the losses?\n\n* lambda_1 and lambda_2 are introduced but then set to 1; the authors could consider removing them from the formula. I acknowledge they mention investigating this effect in the future, but I wonder about the utility of these two parameters in this paper.\n\n* A receptive field equivalent to the size of a single cell is used; would a slightly larger receptive field improve the performance, allowing the model to include some more context?\n\n* The architecture relies on an encoder-decoder model, but the skip connections used in the U-Net architecture are not used here. 
Was this a specific design choice, and would using skip connections improve the performance of the method?\n", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"pros": "\n- The paper is well-written, and easy to read and understand.\n\n- The authors consider the problem of nuclei detection, and propose to decompose the task into three subtasks, trying to predict the confidence map, localization map and a weight map.\n\n- I think the effort of disentangling a complicated task into simpler ones makes sense, and the experiments have shown promising results.", "cons": "\n- In my view, the proposed methods are not completely novel; I suggest the authors cite the following works, to name a few.\n\n- Predicting the confidence map with fully convolutional networks was initially done by:\n\"Microscopy Cell Counting with Fully Convolutional Regression Networks\", W. Xie, J.A. Noble, A. Zisserman, In MICCAI 2015 Workshop.\n\n- The proposed localisation map is actually the result of a distance transform, and was initially used in:\n\"Counting in The Wild\", C. Arteta, V. Lempitsky, A. Zisserman, In ECCV 2016.\n\n\n", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"pros": "\n1. The method part is well-written and easy to follow.\n\n2. The authors formulate the problem as a multi-task learning framework which regresses the centroid location and confidence map and classifies pixel-wise labels simultaneously. The approach makes sense, as highly correlated subtasks benefit from learning mutual information.\n\n3. The vector oriented confidence accumulation generates the accumulator map with a sparse response. 
It reduces the sensitivity to the radius hyperparameter in NMS, which may benefit the final prediction, especially in dense nuclei cases where a proper radius value is hard to define.\n\n4. The experimental results are good on a publicly available dataset, which is persuasive. \n\n5. The paper is also clear about reporting hyper-parameters for reproducibility. ", "cons": "The experiment analysis is not clear in some parts. \n1. In Sec 5.1, the authors use the pixel-wise evaluation metric. Though it explains the mutual benefit to some extent, it is best to also provide the results on the final metrics (F1, Median Distance).\n\n2. There is a mistake in the explanation (the second paragraph, Sec. 5.1). The smooth-L1 loss is a combination of the L1 and L2 losses which is robust to outliers. It is an L1 loss when the value is larger than the threshold, while it is an L2 loss if the value is smaller than the threshold.\n\n3. Fig. 2 shows the probability map of the regression method. But the analysis is not closely tied to this image. Instead, it is better to show the comparison of (Conf+Loc+Wt) and (Conf+Loc), since the explanation is still ambiguous.\n\n\n\nQ: The L_loc value is approximately 4 according to the experiment, which seems to be a large value. What is the magnitude of L_conf and L_wt, since \\lambda_{1} and \\lambda_{2} are set to 1 in the loss calculation? Will L_loc dominate the direction of optimization? 
\n\n", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}], "comment_id": ["SyxefULC44", "Bkl21G0C4N", "ByglurIC4V", "HkeJ08ICVE"], "comment_cdate": [1549850119934, 1549881827918, 1549849960204, 1549850311476], "comment_tcdate": [1549850119934, 1549881827918, 1549849960204, 1549850311476], "comment_tmdate": [1555945974792, 1555945968863, 1555945967063, 1555945953927], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper51/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper51/AnonReviewer1", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper51/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper51/Authors", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Response to AnonReviewer1", "comment": "We thank Reviewer 1 for the suggestions on related work.\n\n1.)  We should have emphasized that our innovation is not about the network backbone but the objective disentangling and confidence accumulation (cf. comment 1 to Reviewer 2, pros 1 and 2 of Reviewer 3).  Using FCN-like structure for nuclei detection and segmentation has been the trend in the field.  We are very thankful for you pointing us to the origin of this design and will cite this paper in the final version!\n\n2.)   The  \u201dCounting  in  The  Wild\u201d  paper  proposed  a  very  interesting  multi-task  learning  approach  for  counting  objects.   The  \u201ddistance  transform\u201d  from\u201dCounting in The Wild\u201d only gives the distance.  Actually, many of the related works  we  mentioned  in  our  paper  came  up  with  ways  to  use  the  distance  to generate  supervision  (cf.   comment  1  to  Reviewer  2).   
In contrast to these, VOCA learns a localization vector with both the direction and the magnitude of the distance. This localization vector allows us to do confidence accumulation, resulting in a very sparse response in the accumulator maps."}, {"title": "Dataset Suggestion", "comment": "For a public dataset, you may check:\nhttps://github.com/ieee8023/countception\n"}, {"title": "Response to AnonReviewer2", "comment": "We are grateful to Reviewer 2 for the positive and constructive comments. We incorporated the feedback by improving the text of the final paper for the addressed issues. We are convinced that these changes enhanced the overall quality of our paper. We want to address the following points in detail:\n\n1.) Regarding the main contribution:\nObject detection at its heart is the combination of object recognition and localization. Previous related works (Ciresan et al., 2013; Wang et al., 2014; Xie et al., 2015; Chen and Srinivas, 2016; Sirinukunwattana et al., 2016; Zhou et al., 2017; Raza et al., 2018) all embraced this and tried to integrate these two tasks. Their regression targets already embedded the localization information by formulating the score as a function of the distance to the center of a ground truth nucleus. Our contribution, as mentioned by Reviewer 2, is \u201cthe formulation of the problem as a multi-task approach\u201d, which means to disentangle rather than to integrate. Our motivation was to disentangle the multiple tasks into simpler objectives to improve model training and understanding.\nIn addition, by learning the localization vectors, we were able to move and accumulate the confidence scores to the target locations to generate accumulator maps with a sparse response, which was never considered in previous related work.\n\n2.) Dataset choice:\nWe deliberately chose the publicly available dataset released by Sirinukunwattana et al., 2016, since many of the related works are benchmarked against this dataset. 
We did apply the proposed method to internal datasets as well, for example for detecting tumor-infiltrating lymphocytes in breast and lung cancer, and consistently found that VOCA outperformed the peak regression methods (noted as PR in our paper). Due to space limitations, we focused on the algorithm and not the applications and decided to compare the model's performance to an established standard. The mentioned independent datasets could be used in future work to establish clinical utility based on patient outcome. We hypothesize that for immunohistochemistry (IHC) stained slides, VOCA should exhibit comparable performance, but this has to be tested in future work. We understand the challenges of varied staining patterns and artifacts in IHC. Our focus on H&E is guided by the fact that all biopsies and resections are stained with H&E, and IHC is only used on an ad hoc basis in special cases.\n\n3.) Labeling noise:\nA brief review of the reference standard by our pathologist authors found cases of incorrect annotations, but the errors do not dominate the dataset. We recognize that the task of exhaustively labeling individual cells is challenging and imprecise, as we have experienced in generating our own datasets.\n\n4.) On region proposal methods:\nWe are skeptical of the merits of comparing VOCA to region proposal based methods, since nuclei detection is defined with point labels (unless one generates arbitrary bounding boxes as in M. van Rijthoven et al., 2018). Artificially increasing the model complexity from labeled coordinates to bounding boxes clearly violates Occam\u2019s Razor. That said, Rijthoven et al. focused on IHC, which might exhibit unique properties that justify increasing the complexity.\n\n5.) Performance Accuracy:\nWe regarded the pixel-wise classification accuracy gain from Conf to Conf+Loc as a good improvement compared to other optimizations of the pipeline. We will reformulate the description as suggested.\n\n6.) 
Upper Bound:\nExploring the upper bound of the approach is an excellent suggestion. Post-processing the training maps did in fact result in an F1-score of 1. The only ambiguous cases are pixels located at exactly the same distance to multiple ground truth nuclei. These rare cases were randomly assigned to one of the ground truth nuclei. We found a negligible drop in recall even with a very high threshold.\n\n7.) Other Comments:\nComputing the pixel-wise accuracy and the weights is dependent on the hyperparameter r. This is explained in sections 3.1 and 5.1, but we will improve the description in the final version. The loss functions are described at the end of section 3.1. \u03bb1 and \u03bb2 can be used to weight the contribution of the three losses during training. The receptive field was chosen based on performance (not shown). Although each neuron in the last encoding block has a receptive field of 16\u00d716, neurons on this and deeper levels are looking at different positions including cellular features and context. Empirically, skip connections did not improve the performance of the proposed model."}, {"title": "Response to AnonReviewer3", "comment": "We appreciate the helpful comments from Reviewer 3. The listed supporting points helped us to improve the contribution description in the final paper significantly. In the following we address the contra points:\n\n1.) We considered measuring the final metrics for the conf and loc configurations, but there were some concerns. Doing confidence accumulation with only loc maps would require more parametrization, for example the question of whether the confidence of all pixels should be regarded as 1. Detection of local maxima on conf maps is challenging because the predictions are not peaked but actually rather flat, as shown in Figure 1. For future work we will investigate other options to measure the final metrics for these two cases.\n\n2.) 
Thanks for pointing out the mistake, which will be corrected in the final version. The smooth-L1 loss is averaged over all pixels. Most pixels have an error > 1 (cf. Q1 distance in Table 2), which is the threshold; therefore most of them incur the L1 loss.\n\nQ.) Thanks for pointing out this inconsistent observation! We double-checked our experiments and realized that the loss actually sums both the x and y channels; therefore the L_loc for each dimension is about 2 pixels. During training, L_conf and L_wt converge about 3 times faster than L_loc. We observed that L_loc is the more challenging objective, but it does not dominate the gradient. We tried different \u03bb1 values: 0.1, 1, and 10, but 1 resulted in the best performance. We will add this information to the final version."}], "comment_replyto": ["B1xE1wQjQN", "SyxefULC44", "BJxl-BooQV", "B1llMpIgQN"], "comment_url": ["https://openreview.net/forum?id=Byg6tbleeE&noteId=SyxefULC44", "https://openreview.net/forum?id=Byg6tbleeE&noteId=Bkl21G0C4N", "https://openreview.net/forum?id=Byg6tbleeE&noteId=ByglurIC4V", "https://openreview.net/forum?id=Byg6tbleeE&noteId=HkeJ08ICVE"], "meta_review_cdate": 1551356588844, "meta_review_tcdate": 1551356588844, "meta_review_tmdate": 1551881981089, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "The reviewers seem to agree that there is value in the multi-task formulation explored in this paper, particularly with the $W_t$ map. I do agree with Reviewer 1 that the authors should discuss the fact that their improvement over the competing methods is marginal (which is fine), rather than stating that the improvement is considerable. 
I also encourage the authors to incorporate the many informative comments from the reviewers in the final version.", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=Byg6tbleeE&noteId=HyxHhz8SLE"], "decision": "Accept"}