AMSR/conferences_raw/midl19/MIDL.io_2019_Conference_SklToCZ2J4.json
{"forum": "SklToCZ2J4", "submission_url": "https://openreview.net/forum?id=SklToCZ2J4", "submission_content": {"title": "Iterative learning to make the most of unlabeled and quickly obtained labeled data in histology", "authors": ["Laxmi Gupta", "Barbara Mara Klinkhammer", "Peter Boor", "Dorit Merhof", "Michael Gadermayr"], "authorids": ["laxmi.gupta@lfb.rwth-aachen.de", "bklinkhammer@ukaachen.de", "pboor@ukaachen.de", "dorit.merhof@lfb.rwth-aachen.de", "michael.gadermayr@fh-salzburg.ac.at"], "keywords": ["Digital pathology", "convolutional neural networks", "kidney", "segmentation", "weakly supervised."], "TL;DR": "Weakly-, and un- supervised segmentation method for histological image data; training with minimum and imprecisely labeled data", "abstract": "Due to the increasing availability of digital whole slide scanners, the importance of image analysis in the field of digital pathology increased significantly. A major challenge and an equally big opportunity for analyses in this field is given by the wide range of tasks and different histological stains. Although sufficient image data is often available for training, the requirement for corresponding expert annotations inhibits clinical deployment. Thus, there is an urgent need for methods which can be effectively trained with or adapted to a small amount of labeled training data. Here, we propose a method for optimizing the overall trade-off between (low) annotation effort and (high) segmentation accuracy. For this purpose, we propose an approach based on a weakly supervised and an unsupervised learning stage relying on few roughly labeled samples and many unlabeled samples. 
Although the idea of weakly annotated data is not new, we firstly investigate the applicability to digital pathology in a state-of-the-art machine learning setting.", "pdf": "/pdf/6d0df04c9337d2014a7e0fda4610526eb343faf8.pdf", "code of conduct": "I have read and accept the code of conduct.", "remove if rejected": "(optional) Remove submission if paper is rejected.", "paperhash": "gupta|iterative_learning_to_make_the_most_of_unlabeled_and_quickly_obtained_labeled_data_in_histology", "_bibtex": "@inproceedings{gupta:MIDLFull2019a,\ntitle={Iterative learning to make the most of unlabeled and quickly obtained labeled data in histology},\nauthor={Gupta, Laxmi and Klinkhammer, Barbara Mara and Boor, Peter and Merhof, Dorit and Gadermayr, Michael},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=SklToCZ2J4},\nabstract={Due to the increasing availability of digital whole slide scanners, the importance of image analysis in the field of digital pathology increased significantly. A major challenge and an equally big opportunity for analyses in this field is given by the wide range of tasks and different histological stains. Although sufficient image data is often available for training, the requirement for corresponding expert annotations inhibits clinical deployment. Thus, there is an urgent need for methods which can be effectively trained with or adapted to a small amount of labeled training data. Here, we propose a method for optimizing the overall trade-off between (low) annotation effort and (high) segmentation accuracy. For this purpose, we propose an approach based on a weakly supervised and an unsupervised learning stage relying on few roughly labeled samples and many unlabeled samples. 
Although the idea of weakly annotated data is not new, we firstly investigate the applicability to digital pathology in a state-of-the-art machine learning setting.},\n}"}, "submission_cdate": 1544457892582, "submission_tcdate": 1544457892582, "submission_tmdate": 1561399484324, "submission_ddate": null, "review_id": ["BJxo3JPh7N", "SyghOQH2mN", "rygnYnQw7N"], "review_url": ["https://openreview.net/forum?id=SklToCZ2J4&noteId=BJxo3JPh7N", "https://openreview.net/forum?id=SklToCZ2J4&noteId=SyghOQH2mN", "https://openreview.net/forum?id=SklToCZ2J4&noteId=rygnYnQw7N"], "review_cdate": [1548672947207, 1548665716339, 1548332164470], "review_tcdate": [1548672947207, 1548665716339, 1548332164470], "review_tmdate": [1548856753817, 1548856751635, 1548856724050], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper5/AnonReviewer2"], ["MIDL.io/2019/Conference/Paper5/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper5/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["SklToCZ2J4", "SklToCZ2J4", "SklToCZ2J4"], "review_content": [{"pros": "The submission suggests an approach for better utilization of the large amount of unlabeled data when applying deep learning methods to digital pathology slides. The suggested approach has two steps (1) train a CNN to segment glomeruli using bounding box segmentations as labels (2) predict segmentations on separate data and use these as labels for training another CNN to segment glomeruli.\nThe problem is highly relevant and I like the the idea of using the distribution of glomeruli shape and size as criteria for filtering segmentation suggestions in Stage 2.", "cons": "I have several issues with the submissions. 
Overall, I find the presentation confusing and unclear and it is possible that most of the issues arise from the lack of clarity.\nThe first paragraph of section 2.1 is a good example of what I find confusing and unclear. What characteristics are you exploiting? What is your approach very similar to? What are BBs? (Keeping track of custom abbreviations is tricky. I had to go back in the text, because I forgot what it referred to. )\n\nAnother example is Figure 5 where missing captions for the subplots makes it impossible to understand without reading the text simultaneously. Additionally, all four plots seem to have their own scaling and extent of the y-axis, making comparisons tricky.\n\nI also find it problematic that 4 out of 9 references are to your own work. Are you the only ones working on segmentation in digital pathology?\n\n\nThe main result is comparing a CNN trained on 8 images with bounding boxes (IS1) to a pair of CNNs trained on the same 8 images with bounding boxes + 9 images without bounding boxes (IS2). \nAs I understand it, Figure 5 (a) shows F-score for different combinations of number of images and number of annotations per image using images from IS1. The F-score is then calculated on five test images (IS3) with the best results being almost 0.90. You then use this result to select the number of images and number of annotations and retrain the model, but this time you get less 0.80 in F-score. Where does this difference come from?\nMore importantly, if my understanding is correct you have used performance on the test set (IS3) to select a model and then you report performance of this model on IS3. This is methodologically wrong, likely overestimates performance and invalidates your comparison.\n\nIf we ignore the above problems for a second, we are left with the conclusion that you get almost 0.90 F-score in Stage 0 where you train on IS1, and slightly lower F-score when you train on IS1 + IS2. 
So it seems best to just train on the bounding boxes. I think you might have used contour segmentations in Stage 0 (otherwise your conclusion does not make sense), but it is not clear to me that this is the case.\n\n\nIt is not clear what the contributions are in this submission. In the abstract you promise\n\"a method for optimizing the overall trade-off between (low) annotation effort and (high) segmentation accuracy\".\nI do not believe you deliver on this promise. You do not present a method that optimizes this trade-off. You conclude that combining bounding box segmentations with unlabeled data works (almost) as well as using contour segmentations. This just shows that we can reduce annotation effort \"for free\", but does not provide a method for optimizing the trade-off.\nI also find it unclear exactly how the proposed method is different from the referenced related work. In the introduction of the methods section you state that you adapt the method from Gademayr et al (2019), and promise details in section 3. I see no mention of this in section 3 and it is not clear to me what you have done. I have a similar problem in section 2.1 where you adapt the method in Khoreva et al. (2017), but you do not clearly state what is adapted.\nIt seems to me, that the actual main contribution is the constraints applied in stage 2 (Cues 2), size and shape of glomeruli, yet these are not clearly described. \n\n\nFinally, I do not agree with your statement in the conclusion that you \"work with noisy easy-to-collect labels\". As I understand it, you derive bounding boxes from contour segmentations by fitting the smallest rectangle that contains all of the contour segmentation. This implies that you have weak labels without noise and perfect accuracy and precision (assuming your ground truth segmentations are 100%, which they most certainly are not). 
What you lack is detail.\nI suggest you investigate how important the quality of bounding box segmentations is, by either using segmentations from multiple annotators or by adding random shifts and scaling.", "rating": "1: strong reject", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "This paper addresses the problem of histological image segmentation. As annotations in histological image are costly to obtain, the authors consider weakly supervised as well as unsupervised learning. Their approach is based on Khoreva's approach that leverages bounding boxes instead of precise pixelwise segmentations to feed a segmentation CNN. \n\nTheir proposal is a cascade of two segmentation models, that make use of 'cues', which are statistical rules applied on the area of the segmentation results. \nThe first model is trained with BB and iteratively improved thanks to the cues.\nThe second model is trained with the results of the first model and iteratively improved thanks to some other cues.\nExperiments include accuracy results depending on the number of iterations (for both stages), and comparison to a fully supervised network.\n\nThe paper is well written. It presents an interesting contribution to the weakly supervised segmentation of histological images. \n- Image patch size is set to 492. How was this value set?\n- Are the image processed in the RGB space?\n- Results are given on a patch basis, would it be possible to give some accuracy image-wise or glomeruli-wise ?\n\nMinor comments:\n- please specify what training set is used to train the fully supervised CNN, IS1 and IS2?\n- captions of Fig 5 could be improved (eg (a) stage 0, (b) stage 1 (c) and (d) stage 2). The general caption could also reflect better the figure content. 
Although not a big deal, this might help when skimming through the paper.\n- in Fig 4, one can see the nomber of FP decreasing, however one cannot distinguish the evolution of green/blue areas (too small).\n- typo in conclusion: automation (...) demandS", "cons": "The 'cues' are specific to the application at stake and rely on some hard-constrained statistics, which are assessed form the data. They seem be too constraining especially in Stage 2, as acknowledged by the authors.\n", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"pros": "This paper proposes an iterative two-stage approach based on weakly supervised and unsupervised training stages for histological image segmentation. The paper is clear, well organized, well written and easy to follow. The contributions are clearly stated and a thorough literature review is presented that allows a good insight into the stated contributions. Clever design choices are made such as clues 1 and 2 in stage 1 and 2, respectively, that allows improved promising performance. ", "cons": "Evaluation on the test images (IS3 - 5 WSIs) is missing and should be included to give more insight into the proposed approach.\n \nSection 4, Stage 2: Why sixth iteration of Stage 1 is used? Comment on this selection? And why 10th iteration is not used/reported where STD is comparatively low?\n\nFigure 5: Missing detailed caption making it difficult to follow the four plots. Explain what each plots are. \nFor each plot, use the same range for y-axis especially for (b), (c), (d) probably from 0.2 to 1.0. \n\nPage 9, line 1: The scores of Stage 2 (Fig. 5(b)) --> The scores of Stage 1 (Fig. 
5(b)) ", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}], "comment_id": ["H1go2Ekc4V", "H1eonxk5EN", "SkgVHgJ9NV"], "comment_cdate": [1549558963249, 1549557938724, 1549557820323], "comment_tcdate": [1549558963249, 1549557938724, 1549557820323], "comment_tmdate": [1555946019490, 1555946019233, 1555946019001], "comment_readers": [["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper5/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper5/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper5/Authors", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Reply to reviewer", "comment": "We thank the reviewer for careful review of the manuscript and the valuable feedback.\n\nIt is unfortunate that the paper was not clear enough. From your review, it does seem that there have been misunderstandings. In the following, we hope to clarify your doubts.\n\nReply to general comments:\n- Our approach is similar to the one proposed by Khoreva et al. (2017) because we evaluate the applicability of the method proposed by the latter to histological data. We will rephrase the sentence here to improve readability.\n- We define the term \u2018bounding box (BB)\u2019 at the beginning of section 2 \u2018Methods\u2019 with the help of a figure (Fig. 3). As it occurs repeatedly in the paper, we think it would be cumbersome if we do not abbreviate this term. Nevertheless, we will also mention 'BB' in Fig. 3 in the revised manuscript to enable a quicker reference.\n- The high ratio of own citation is due to the fact that our work is most related (e.g. with focus on the same organ). But your concern is definitely justified! 
We will replace two of our citations with other references to state-of-the-art research on histopathology.\n\nReply to specific comments:\nQ. What is the main result comparing? Why is there a difference in F-Scores?\nNote: When using images IS2 (for stage 2), we do not use IS1 (with or without bounding boxes) at all. Please refer Fig. 2 (Schematic), and section 2.2, second line (\u2018Effectively,\u2026\u2019).\n\n- The 'main results' (Fig. 5) compare three different training settings with five images (IS3) as test images:\nFig. 5(a)- Result of training on IS1 (8 images) with \u2018correctly labeled data,\u2019 as explained in section 3. (setting for Stage 0). This forms our baseline (fully supervised training). \nFig. 5(b)- Result of training on IS1 with bounding boxes (Stage 1- weakly supervised method)\nFig. 5(c,d)- Result of training on IS2 (9 images, separate from those used in IS1)\n\nWith the best configuration obtained from Fig. 5a we retrain the model with only bounding boxes in Stage 1. That accounts for the difference in F-score (as you rightly point out in the your comment). We obtain F-score of 0.9 with fully supervised training with correctly labeled data (contour segmentations) and F-score of around 0.8 with bounding boxes. \n\nIn the revised manuscript, we intend to make the necessary changes (e.g. update Fig. 5) to improve clarity here. We will also rename IS1, IS2, and IS3 to IS-train1, IS-train2, and IS-test, respectively to avoid confusion. \n\nQ. Using IS3 to report performance is methodologically wrong.\n- We do agree with you that an evaluation with a larger set of annotated data needs to be performed in future. However this work serves as proof-of-concept; our methodology may overestimate performance, but does not invalidate comparison.\n\nQ. 
What are the contributions?\n- This work aims to contribute a method that provides high segmentation accuracy while maintaining low annotation efforts.\nIt is possible that the word \u2018optimize\u2019 used in this context is misinterpreted or misunderstood. By optimize here we mean \u2018to make the most of (available annotations),\u2019 similar to the title of the paper, while aiming at high segmentation accuracies. We will change this word to avoid misunderstanding. \n\nQ. How is the proposed work different from the related methods?\n- Gadermayr et al. (2019) uses precisely annotated images for developing a fully supervised segmentation model. We train with significantly less number of imprecise (bounding box) annotations. \nSection 3, second paragraph implicitly explains how we adapted Gadermayr et al. (2016) for our experiments. E.g. Gadermayr et al. (2019) uses all the glomeruli available on all the training images; we extract different number of patches from different number of training images (refer Fig. 5a). We optimize the epochs according to our training data.\nKhoreva et al. (2017) work with \u2018easy to segment\u2019 objects, e.g. horses on a green field. We evaluate the applicability of this method (Stage 1) on histological images to segment glomeruli, which are not easily distinguishable from their background tissue. (We motivate this in the last two paragraphs of section 1- Introduction).\nThe first cue mentioned as 'Cues 1' was introduced in our approach to suit histological data. Cue 3 (DenseCRF) from Khoreva et al. (2017) was omitted as it was not expected to work in histological images because of lack of proper object-background separation. \n\nQ. Shape and size of glomeruli are not described\n- Fig. 1 and Fig. 3 provide an example of the size and shape of the glomeruli (in reference to the patch-size). \n\nQ. 
The labels are not 'noisy,' but 'weak'\n- We use the word \u2018noise\u2019 to indicate the presence of false positives in the training data. Since it is easily misinterpreted, we propose to replace 'noise' by \u2018imprecise.\u2019 \n\nQ. Suggestions\n- Thank you for the suggestion. We will take this into account in the future work."}, {"title": "Reply to reviewer", "comment": "We thank the reviewer for careful review of the manuscript and the valuable feedback.\n\nQ.Image patch size is set to 492. How was this value set?\n- The patches to train the segmentation model were extracted according to the experimental settings proposed in Gadermayr et al. 2019, from which the model was adapted. This is important to enable a fair comparison of our method with the reference (fully supervised) approach. Accordingly, to keep the data consistent, the patch size was set to a fix value of 492 through all the experiments.\n\nQ. Are the image processed in the RGB space?\n- Yes. We will include this information in the revised manuscript in the experimental setting section.\n\nQ. Results are given on a patch basis, would it be possible to give some accuracy image-wise or glomeruli-wise ?\n- Due to the page limit, we find it difficult to report further evaluation on object or image level. More importantly, mean scores on images are expected to be very similar to the presented scores. Evaluations on object level pose further questions that may be complicated to address, e.g. how to handle objects which are not detected or false-positive objects.\n\nMinor Comments\nQ. please specify what training set is used to train the fully supervised CNN, IS1 and IS2?\n- This information is available in Section 3 'Image dataset & experimental setting.' \nTo avoid confusion, we intend to rename the training image set used for Stage 0 and Stage 1 as IS-train1 (currently IS1), Stage 2 as IS-train2 (currently IS2), and the test image set as IS-test (currently IS3). \n\nQ. 
captions of Fig 5 could be improved (eg (a) stage 0, (b) stage 1 (c) and (d) stage 2). The general caption could also reflect better the figure content. Although not a big deal, this might help when skimming through the paper.\n- Thank you for the hint. The caption be updated in the revised version of the manuscript. \n\nQ. in Fig 4, one can see the nomber of FP decreasing, however one cannot distinguish the evolution of green/blue areas (too small).\n- Fig. 4 is intended to provide an intuitive understanding of how the FP and FN of segmented objects change with every iteration. A quantitative evaluationis provided in Fig. 5 and the corresponding text. \n\nQ. typo in conclusion: automation (...) demandS\nThank you for the hint, this will be corrected it in the revision."}, {"title": "Reply to reviewer", "comment": "We thank the reviewer for careful review of the manuscript and the valuable feedback.\n\nQ. Evaluation on the test images (IS3 - 5 WSIs) is missing and should be included to give more insight into the proposed approach.\n- All the results presented in section 4 (Results) are based on five test images-IS3. To avoid confusion, we intend to rename the test image set as IS-test (currently IS3) in the revised manuscript. Likewise, the training images used in Stage 1 and 2 would be renamed as IS-train1 (currently IS1) and IS-train2 (currently IS2), respectively. Furthermore, we will also include a sentence in the Results section to make this more clear.\n\nQ. Section 4, Stage 2: Why sixth iteration of Stage 1 is used? Comment on this selection?And why 10th iteration is not used/reported where STD is comparatively low? \n- To train Stage 2 we chose the networks with the worst (first iteration) and the best (sixth iteration) performance in Stage 1. This is done to evaluate the effect of Stage 1 on the performance of Stage 2. \nConsidering higher iterations would only result in more computation time without performance improvement in Stage 2. 
Even though we fully agree with the reviewer that an even more extensive evaluation would be nice to have, the page limit would not allow us to report results from all the iterations. \n\nQ. Figure 5: Missing detailed caption making it difficult to follow the four plots. Explain what each plots are. \nFor each plot, use the same range for y-axis especially for (b), (c), (d) probably from 0.2 to 1.0. \n- Thank you for pointing this out. This was a mistake in the original submission. We agree that having the same y-axis range helps in comparison and will update the figures according to the suggestion. \n\nQ. Page 9, line 1: The scores of Stage 2 (Fig. 5(b)) --> The scores of Stage 1 (Fig. 5(b)) \n- This was an unfortunate mistake. Thank you for the careful observation. It would be corrected in the revised manuscript."}], "comment_replyto": ["BJxo3JPh7N", "SyghOQH2mN", "rygnYnQw7N"], "comment_url": ["https://openreview.net/forum?id=SklToCZ2J4&noteId=H1go2Ekc4V", "https://openreview.net/forum?id=SklToCZ2J4&noteId=H1eonxk5EN", "https://openreview.net/forum?id=SklToCZ2J4&noteId=SkgVHgJ9NV"], "meta_review_cdate": 1551356602123, "meta_review_tcdate": 1551356602123, "meta_review_tmdate": 1551881984183, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "In order to alleviate the challenges of acquiring large quantities of manually annotated digital pathology images, the authors suggest an approach for better utilization of unlabeled or only weakly annotated data. This is an important direction of research in the field of medical imaging and relevant for the conference. Two out of three reviewers recommend accepting the paper with high confidence. One reviewer however recommends rejection. One of the main concerns of the negative review regarding presentation of the paper being confusing and unclear is somewhat contradicted by the other reviewers commenting that the paper is well written and easy to follow. 
The authors have answered this criticism and most other critical points sufficiently. I am confident that the authors can address the negative comments in a minor revision of the paper.", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=SklToCZ2J4&noteId=ByeM6GLB84"], "decision": "Accept"}