{"forum": "SkeBT7BxeV", "submission_url": "https://openreview.net/forum?id=SkeBT7BxeV", "submission_content": {"title": "Learning from sparsely annotated data for semantic segmentation in histopathology images", "authors": ["JM Bokhorst", "H Pinckaers", "P van Zwam", "I Nagtegaal", "J van der Laak", "F Ciompi"], "authorids": ["john-melle.bokhorst@radboudumc.nl", "hans.pinckaers@radboudumc.nl", "peter.vanzwam@radboudumc.nl", "iris.nagtegaal@radboudumc.nl", "jeroen.vanderlaak@radboudumc.nl", "francesco.ciompi@radboudumc.nl"], "keywords": ["Semantic segmentation", "Loss balancing", "Partially labelled data", "Weak supervision", "Class imbalance"], "TL;DR": "Semantic segmentation with sparsely annotated data", "abstract": "We investigate the problem of building convolutional networks for semantic segmentation in histopathology images when weak supervision in the form of sparse manual annotations is provided in the training set. We propose to address this problem by modifying the loss function in order to balance the contribution of each pixel of the input data. We introduce and compare two approaches of loss balancing when sparse annotations are provided, namely (1) instance based balancing and (2) mini-batch based balancing. We also consider a scenario of full supervision in the form of dense annotations, and compare the performance of using either sparse or dense annotations with the proposed balancing schemes. 
Finally, we show that using a bulk of sparse annotations and a small fraction of dense annotations allows us to achieve performance comparable to full supervision.", "pdf": "/pdf/8338938b6dcfbe904eb31cd4b26a98744e954be2.pdf", "code of conduct": "I have read and accept the code of conduct.", "remove if rejected": "(optional) Remove submission if paper is rejected.", "paperhash": "bokhorst|learning_from_sparsely_annotated_data_for_semantic_segmentation_in_histopathology_images", "_bibtex": "@inproceedings{bokhorst:MIDLFull2019a,\ntitle={Learning from sparsely annotated data for semantic segmentation in histopathology images},\nauthor={Bokhorst, JM and Pinckaers, H and Zwam, P van and Nagtegaal, I and Laak, J van der and Ciompi, F},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=SkeBT7BxeV},\nabstract={We investigate the problem of building convolutional networks for semantic segmentation in histopathology images when weak supervision in the form of sparse manual annotations is provided in the training set. We propose to address this problem by modifying the loss function in order to balance the contribution of each pixel of the input data. We introduce and compare two approaches of loss balancing when sparse annotations are provided, namely (1) instance based balancing and (2) mini-batch based balancing. We also consider a scenario of full supervision in the form of dense annotations, and compare the performance of using either sparse or dense annotations with the proposed balancing schemes. 
Finally, we show that using a bulk of sparse annotations and a small fraction of dense annotations allows us to achieve performance comparable to full supervision.},\n}"}, "submission_cdate": 1544733628745, "submission_tcdate": 1544733628745, "submission_tmdate": 1561398405646, "submission_ddate": null, "review_id": ["BygRaZomfE", "H1e1LGMRX4", "BkgNoSMyN4"], "review_url": ["https://openreview.net/forum?id=SkeBT7BxeV&noteId=BygRaZomfE", "https://openreview.net/forum?id=SkeBT7BxeV&noteId=H1e1LGMRX4", "https://openreview.net/forum?id=SkeBT7BxeV&noteId=BkgNoSMyN4"], "review_cdate": [1547051462391, 1548784199478, 1548850588336], "review_tcdate": [1547051462391, 1548784199478, 1548850588336], "review_tmdate": [1548856702936, 1548856681689, 1548856678815], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper100/AnonReviewer1"], ["MIDL.io/2019/Conference/Paper100/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper100/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["SkeBT7BxeV", "SkeBT7BxeV", "SkeBT7BxeV"], "review_content": [{"pros": "The study looked into the utility of CNNs for semantic segmentation of histopathological images in colorectal cancer, with a focus on settings where training labels are available in the form of sparse manual annotations. \n\nOverall, the paper is well written and nicely structured, its methodology is well designed, and its observations are clearly described.\n\nThe problem it aims to address is important, and has potential applications beyond histopathological images and colorectal cancer.", "cons": "Some areas for improvement are included below.\n\n*Abstract:\n----------------\n- \u201cWe propose to address this problem by modifying the loss function in order to balance the contribution of each pixel of the input data\u201d... 
This statement gives the reader the initial impression that a major focus of the study is the modification of the loss function, whereas in fact this refers to an empirical evaluation of different strategies for assigning weights to pixel samples (which, in turn, contribute to the loss function). Please edit.\n\n*Introduction :\n----------------------\n- The authors build a good case for why the use of sparse labels is interesting. Reference to previous work in the literature is, however, rather limited.\n\n*Materials :\n-----------------\n- It would be good to mention details on variation within the colorectal cancer patient cohort analysed here, e.g. does the number of images reported correspond to one image per patient? How many malignant vs. benign cases? Age and sex distributions? etc. \n- What is the experience of the annotators with this kind of data? How familiar are the non-pathologists with medical images?\n- Please clarify whether there is any overlap between the images on which sparse and dense annotations were carried out (i.e. are some of the sparsely annotated images essentially a sub-set of the densely annotated ones?).\n- Please confirm that there are actually two training sets, two validation sets, and two test sets, for sparse and dense data respectively.\n\n*Method :\n---------------\n- The subheadings are confusing in this section. Perhaps some use of numbering can help organise related subheadings together. \n\n*Results, Discussion and Conclusion:\n------------------------------------------------------\n- One needs to be careful here before interpreting and drawing solid conclusions from the reported results. There are no confidence intervals associated with the reported Dice scores, and it is difficult to really evaluate the levels of overlap between different models\u2019 performances. 
Since model robustness is not evaluated, and given that sampling effects may play a big role here, there might be different observations if the same analysis were carried out on a perturbed version of the dataset. These points need to be highlighted in the discussion.\n- Dice scores alone may not give sufficient insight into how the models perform. Please comment on the need for additional metrics, e.g. accuracy, sensitivity, specificity. \n- Please discuss limitations and what the next steps are for this research.\n\n", "rating": "4: strong accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct", "oral_presentation": ["Consider for oral presentation"]}, {"pros": "The authors proposed the use of sparsely annotated data to train a U-net architecture for semantic segmentation of histopathology images. This is an interesting problem, since the training of deep neural networks usually requires a large number of labeled images and the process of labelling is very time-consuming. ", "cons": "The main problem of this paper is the weakness of the experiments. The method is validated on only 5 test images. In my opinion, this is also the reason for the instability of the results in Table 2 when the two different types of balancing strategies are compared. \nMy detailed comments follow:\n-- In the introduction, there should be more references to other segmentation approaches that use sparsely annotated data.\n-- Why does the percentage of pixels for each class change between the sparse and the dense annotations in Table 1? Why does it increase for some classes and decrease for others? I think that for a fair comparison between dense and sparse annotations the authors should keep these ratios more similar. \n-- Why didn't the authors apply a cross-validation analysis? This would have validated their method in a more robust way.\n-- It is not clear whether the authors use the standard or a modified U-net. What does a 5-layer-deep U-net mean? 
A figure of the network architecture could help the reader.\n-- The authors should also show the results obtained with the densely annotated data and the two balancing strategies, at least for the mini-batch balancing. Although it\u2019s true that using dense annotations could solve the instance balancing problem, I don't understand why the problem of mini-batch imbalance is also solved. \n", "rating": "2: reject", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "- The authors propose dedicated loss functions to train CNNs for segmentation tasks based on sparsely annotated data.\n- Clearly described methods, which could be applied in a wide range of tasks, as manual annotations are always difficult to obtain in medical imaging.\n- Comparison with densely annotated images shows comparable results.", "cons": "- Overall there seems to be a slight improvement over just using a mask of the annotated pixels, but this improvement is not clear for all segmentation classes. 
In Figure 3 it can also be clearly observed that the improvements are very different for different tissues, which makes it difficult to evaluate the approach.", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": ["HJlXnAZs4N", "rJgXu1fo44"], "comment_cdate": [1549635243133, 1549635435026], "comment_tcdate": [1549635243133, 1549635435026], "comment_tmdate": [1555946011131, 1555946010877], "comment_readers": [["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper100/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper100/Authors", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Answer to AnonReviewer3", "comment": "A1: We acknowledge that the test data used in this paper come from whole-slide images of 5 patients. Please realize that a single (gigapixel) whole-slide image has an approximate size of 100,000 x 100,000 pixels. One such image per patient was selected. In each WSI, dense manual annotations were made in large regions of interest (average area = 0.571 mm2 per region, with pixel size = 0.25 um/px), a very time-consuming task that motivated the development of the research presented in this paper. As a result, 49 regions of interest were manually (densely) annotated, from which 1,250 non-overlapping tiles were used as the test set. This represents the actual size of the test set. We realize that the way the test set is currently presented in the paper may be misleading. We will take care of providing the aforementioned details in a revised version of the manuscript.\n\nA2: We thank the reviewer for this comment, which motivated us to conduct a new literature search by narrowing the search criteria to sparse annotations and semantic segmentation under weak supervision. As a result, additional references were found, mostly in the computer vision / non-medical field. 
Remarkably, we found an important work on sparse versus dense annotations, applied to CT images in radiology: B. Glocker et al., \u201cVertebrae Localization in Pathological Spine CT via Dense Classification from Sparse Annotations\u201d, MICCAI 2013. We will expand the state-of-the-art section of our manuscript by including new references, as suggested by this reviewer.\n\nA3: The difference between the ratios is a direct result of the annotation strategy and is one of the arguments at the core of this paper. Dense annotations are made by first manually selecting regions of interest (typically rectangular regions) and subsequently drawing the borders of all tissue types present in the region of interest. One main drawback of this strategy is that classes that are semantically underrepresented (e.g., nerves, small clusters of lymphocytes or erythrocytes) will inevitably be represented by a smaller percentage of pixels overall. The strength of using sparse annotations lies in (1) a much less time-consuming procedure, and (2) the possibility of annotating non-rectangular regions which solely contain one tissue type. The latter property allows one to select regions in order to compensate for skewed ratios of tissue amounts. This effect is visible in some of the classes considered in this approach. Additionally, during training, one can compensate for class imbalances in several ways, both at the level of patch sampling and at the level of pixel contribution to the loss function, which is one of the main contributions of this paper.\n\nA4: The test set consists of whole-slide images from 5 patients, which in the end resulted in 1,250 test patches. As described in the paper, one of the main focuses of this research is the possibility of training a model with sparse annotations and evaluating its performance on dense annotations. 
Since in the test set dense annotations (rather than sparse annotations) are considered the reference standard, a cross-validation approach would not be feasible, because of the implicit difference in annotated data.\n\nA5: We fully agree with this comment and will add a figure as well as additional details in the text of the paper to support the presented description of the network architecture. \n\nA6: We agree with the reviewer that those results should help a reader understand the performance of the proposed approach in the presence of dense annotations. We have conducted the experiments suggested by the reviewer and noticed no significant changes with respect to a plain approach using dense annotations. We will make sure that a revised version of this manuscript contains information about this additional finding.\n\nA7: The main goal of this paper is to investigate different novel approaches to semantic segmentation when only sparse annotations, a combination of sparse + dense annotations, or only dense annotations are available. Several approaches to tackling class imbalance via sampling and contribution to the loss function are presented. Comparisons of most combinations of the aforementioned approaches are presented in the paper. Additional comparison with state-of-the-art methods is out of scope for the current work, whose main purpose is to compare the relative performance of several approaches to sparse and dense annotations.\n"}, {"title": "To all reviewers", "comment": "The authors would like to thank all reviewers for their comments. A more detailed answer to the comments of AnonReviewer3 is given below. 
"}], "comment_replyto": ["H1e1LGMRX4", "SkeBT7BxeV"], "comment_url": ["https://openreview.net/forum?id=SkeBT7BxeV&noteId=HJlXnAZs4N", "https://openreview.net/forum?id=SkeBT7BxeV&noteId=rJgXu1fo44"], "meta_review_cdate": 1551356594750, "meta_review_tcdate": 1551356594750, "meta_review_tmdate": 1551881982081, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "The authors propose a method for training models for semantic segmentation from sparsely annotated whole-slide histopathology images. To this end, the authors propose dedicated loss functions to learn from sparsely annotated data. \n\nTwo out of three reviewers recommend acceptance of the paper. Both comment on the clearly described methods and observations of the paper. The reviewer recommending rejection is mainly concerned about the experimental weaknesses of the paper, stating that only 5 test images have been used, causing unstable results. \nThis comment has been rebutted by the authors. Because of the large size of whole-slide images, they can provide enough data to train deep models even if they stem from just 5 patients. The total number of cropped whole-slide image regions used in the paper is 49 regions of interest. Given that these are large slide images with much variation, from which 1,250 non-overlapping tiles were used as the test set, the actual size of the test set seems large enough to make some valid observations given the presented experiments.\n", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=SkeBT7BxeV&noteId=r1ej3MUr8N"], "decision": "Accept"}