{"forum": "BkxJkgSlx4", "submission_url": "https://openreview.net/forum?id=BkxJkgSlx4", "submission_content": {"title": "Stain-transforming cycle-consistent generative adversarial networks for improved segmentation of renal histopathology", "authors": ["Thomas de Bel", "Meyke Hermsen", "Jesper Kers", "Jeroen van der Laak", "Geert Litjens"], "authorids": ["thomas.debel@radboudumc.nl", "meyke.hermsen@radboudumc.nl", "j.kers@amc.uva.nl", "jeroen.vanderlaak@radboudumc.nl", "geert.litjens@radboudumc.nl"], "keywords": ["Deep learning", "generative adversarial networks", "medical imaging", "stain transformation"], "TL;DR": "Adapted cycleGAN architecture great for stain conversion improving segmentation performance", "abstract": "The performance of deep learning applications in digital histopathology can deteriorate significantly due to staining variations across centers. We employ cycle-consistent generative adversarial networks (cycleGANs) for unpaired image-to-image translation, facilitating between-center stain transformation. We find that modifications to the original cycleGAN architecture make it more suitable for stain transformation, creating artificially stained images of high quality. Specifically,  changing the generator model to a smaller U-net-like architecture, adding an identity loss term, increasing the batch size and the learning all led to improved training stability and performance. Furthermore, we propose a method for dealing with tiling artifacts when applying the network on whole slide images (WSIs). We apply our stain transformation method on two datasets of PAS-stained (Periodic Acid-Schiff) renal tissue sections from different centers. 
We show that stain transformation is beneficial to the performance of cross-center segmentation, raising the Dice coefficient from 0.36 to 0.85 and from 0.45 to 0.73 on the two datasets.", "pdf": "/pdf/88964612deea7cd46447efa6db840e5edc48e90b.pdf", "code of conduct": "I have read and accept the code of conduct.", "remove if rejected": "(optional) Remove submission if paper is rejected.", "paperhash": "bel|staintransforming_cycleconsistent_generative_adversarial_networks_for_improved_segmentation_of_renal_histopathology", "_bibtex": "@inproceedings{bel:MIDLFull2019a,\ntitle={Stain-transforming cycle-consistent generative adversarial networks for improved segmentation of renal histopathology},\nauthor={Bel, Thomas de and Hermsen, Meyke and Kers, Jesper and Laak, Jeroen van der and Litjens, Geert},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=BkxJkgSlx4},\nabstract={The performance of deep learning applications in digital histopathology can deteriorate significantly due to staining variations across centers. We employ cycle-consistent generative adversarial networks (cycleGANs) for unpaired image-to-image translation, facilitating between-center stain transformation. We find that modifications to the original cycleGAN architecture make it more suitable for stain transformation, creating artificially stained images of high quality. Specifically,  changing the generator model to a smaller U-net-like architecture, adding an identity loss term, increasing the batch size and the learning all led to improved training stability and performance. Furthermore, we propose a method for dealing with tiling artifacts when applying the network on whole slide images (WSIs). We apply our stain transformation method on two datasets of PAS-stained (Periodic Acid-Schiff) renal tissue sections from different centers. 
We show that stain transformation is beneficial to the performance of cross-center segmentation, raising the Dice coefficient from 0.36 to 0.85 and from 0.45 to 0.73 on the two datasets.},\n}"}, "submission_cdate": 1544732631426, "submission_tcdate": 1544732631426, "submission_tmdate": 1561398041704, "submission_ddate": null, "review_id": ["BylgEMLnX4", "rklujIbimV", "BJlioA76mV"], "review_url": ["https://openreview.net/forum?id=BkxJkgSlx4&noteId=BylgEMLnX4", "https://openreview.net/forum?id=BkxJkgSlx4&noteId=rklujIbimV", "https://openreview.net/forum?id=BkxJkgSlx4&noteId=BJlioA76mV"], "review_cdate": [1548669479985, 1548584608513, 1548725922935], "review_tcdate": [1548669479985, 1548584608513, 1548725922935], "review_tmdate": [1548856752269, 1548856739269, 1548856688748], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper99/AnonReviewer2"], ["MIDL.io/2019/Conference/Paper99/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper99/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["BkxJkgSlx4", "BkxJkgSlx4", "BkxJkgSlx4"], "review_content": [{"pros": "This paper presents a stain-transforming cycle-consistent GAN for improving image recognition in histopathology.\nThere are several contributions from this paper:\n1. Presenting the GAN method for stain transformation, which is straightforward for GAN methods. \n2. Introducing a modified overlapping strategy to remove tiling artifacts occured in patch sliding window approaches.\n3. Cross center tissue segmentation has been validated using stain transformation. \nOverall, this paper is well written and experimental results are extensively validated. \n", "cons": "For the evaluation metrics on stain transformation, it was not elaborately explained, including SSIM and Wasserstein distance. 
Please cite related references and provide detailed explaination;\nIn Table 1, the depth number affected the performance significantly, whether this is the same case for cycleGAN-baseline? \nThe results show that with stain transformation, the augmentation did not improve the segmentation performance. This is quite interesting. From my perspective, augmentation and stain transformation are kind of complementary. Please show more cross-validation or cross-center results to draw the conclusion. ", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature", "oral_presentation": ["Consider for oral presentation"]}, {"pros": "Although the technical novelty of the presented work is not high (using a CycleGAN to \"style transfer\" digital histopathology slide images), this paper is an excellent example for using an established science or method in an applicable manner for a medical application. They have done it while designing, running and presenting a very well-thought-out set of experiments and evaluation metrics. I just enjoyed reading the paper! The presented results show that the the proposed CycleGAN achieves better performance than the common solutions, while boosting the performance of the segmentation network. \n", "cons": "1. It's a known fact that in the medical field the lack of (training) data is a major limitation to evaluate new methods. However, in this work, the small number of centers (only two) and the fact that the method was only trained to transform 1=>2 and not vice versa is a big drawback. I urge the authors to collect more data, from more centers, while using cross validations to evaluate their method to the most.\n\n2. 
The authors did not compare their results to the other CycleGANs methods they cited (Gadermayr et al., 2018; Shaban et al.,\n2018).", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct", "oral_presentation": ["Consider for oral presentation"]}, {"pros": "The paper presents an interesting idea of using cycle-consistent GANs for stain transfer in histopathology.\n\nthere are several minor contributions presented by the authors to the histopathology image analysis (mostly applications):\n+ applying cycleGAN for stain transfer in histopathology for segmentation task;\n+ sliding through whole image to reduce tiling artifacts (which are apparent when performing this task in tile-by-tile approach);\n+ a limited but promising cross-center evaluation. \n\nThe paper is also clearly written, the structure and the content is easy to follow.", "cons": "-- it is not clear why Wasserstein distance is chosen as a quality assessement to measure differences between image histograms? Elaborate on this, or provide reference for this choice? Why not any other distance between histograms?\n\n-- it is also not clear why the full results of the segmentation network are not provided, only average Dice overlap between all classes is provided.\n\n-- the presented validation is probably sufficient for conference paper, but it would be very interesting to see comparison to the papers cited by the authors for stain transfer - (Shaban et al., 2018; Rivenson et al., 2018).\n\n-- the authors did only one way cross validation: AMC to RUMC transformation due to limited training segmentation, so the conclusions should be some-how scaled back to the claims that are sufficiently supported in the paper.\n\n-- no quantitative results on comparison between tile-by-tile approach and the proposed sliding through whole image approach. 
\n\nminor:\n - provide full form of WSI in the text (not only in abstract)\n- section 3.1: a gap between \"twenty four\" is missing\n- SSIM is not a metric in the mathematical sense, please clarify. \n- SSIM has also two parameters, provide them.\n- the authors use CycleGAN, cycleGAN - make it consistent though the paper\n\n", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": ["H1gljE4sEN", "B1eSTVEsNE", "ryxP_NViNE"], "comment_cdate": [1549644952198, 1549644988923, 1549644911144], "comment_tcdate": [1549644952198, 1549644988923, 1549644911144], "comment_tmdate": [1555945971190, 1555945970971, 1555945970685], "comment_readers": [["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper99/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper99/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper99/Authors", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "response to reviewer two", "comment": "We would like to thank reviewer three for the review and constructive commentary. We will address each concern individually.\n1. For the evaluation metrics on stain transformation, it was not elaborately explained, including SSIM and Wasserstein distance. Please cite related references and provide detailed explaination; \nWe added citations for the SSIM and Wasserstein distance histogram comparison. (Image quality assessment: from error visibility to structural similarity, Wang et al.) (An efficient earth mover's distance algorithm for robust histogram comparison, Ling et al.).\n2. In Table 1, the depth number affected the performance significantly, whether this is the same case for cycleGAN-baseline? \nThe architectures with lower depths were briefly explored to test the limit of reducing the network size, while keeping a good performance. 
We felt that it was better to add these results rather than to omit them. More extensive testing of different depths is warranted, but we decided that this would be out of scope for this paper, also concerning the soft page limit. We didn\u2019t perform the same experiments for the original cycleGAN, but we expect the same performance decrease when lowering the amount layers. There are many parameters, like depth, which affect the performance of the cycleGAN, however extensively exploring this was not within the scope of this research.\n3. The results show that with stain transformation, the augmentation did not improve the segmentation performance. This is quite interesting. From my perspective, augmentation and stain transformation are kind of complementary. Please show more cross-validation or cross-center results to draw the conclusion. \nWe also expected stain transformation to be complementary to augmentation as well. It turned out not to be the case for our dataset in the direction of AMC -> Radboudumc. Based on the commentary by all reviewers, we decided to perform the experiments in the direction Radboudumc -> AMC as well. Due to high overlap of commentary between all reviewers, we address this in the 4th point of reviewer one. \nWe provide a link to the revised paper: \nhttps://drive.google.com/file/d/16mrio1OoiChC9Daq5bVH1aHHyjDGqOyL/view?usp=sharing\n"}, {"title": "response to reviewer three", "comment": "We would like to thank reviewer three for the review and constructive commentary. We will address each concern individually.\n1. It's a known fact that in the medical field the lack of (training) data is a major limitation to evaluate new methods. However, in this work, the small number of centers (only two) and the fact that the method was only trained to transform 1=>2 and not vice versa is a big drawback. I urge the authors to collect more data, from more centers, while using cross validations to evaluate their method to the most. 
\nWe agree with all reviewers that evaluation on multiple centers is needed for better validation, but in the timespan of the rebuttal phase, are unable to collect data from a third or fourth center. However (based on the commentary of reviewers one and 2 as well) we did perform the 2 => 1 transform direction (from Radboudumc to AMC). The results for this are added to the paper.  Due to the high overlap in commentary of all three reviewers on this point we refer to point 4 of reviewer one for more elaboration.\n2. The authors did not compare their results to the other CycleGANs methods they cited (Gadermayr et al., 2018; Shaban et al., 2018).\nBoth cited methods diverge little to the original cycleGAN implementation. We chose to compare only with the original cycleGAN approach. We felt that a comparison of cycleGAN (or classical) methods for stain transformations was out of the scope of this paper, as we mainly wanted to focus on the added benefit of stain transformation and its comparison with augmentation for segmentation performance. Comparison of stain conversion using different cycleGAN / classical machine learning approaches is something that we would like to address in future research. We added a pointer to this possible research direction in our discussion:\n\u201cFinally, comparing different stain transformation methods, both other cycleGAN and classical machine learning approaches, will be an interesting venue to explore in future research.\u201d\nWe provide a link to the revised paper: \nhttps://drive.google.com/file/d/16mrio1OoiChC9Daq5bVH1aHHyjDGqOyL/view?usp=sharing\n"}, {"title": "response to reviewer one", "comment": "Response to reviewer one\nWe would like to thank reviewer one for the review and constructive commentary. We will address each concern individually.\n1. it is not clear why Wasserstein distance is chosen as a quality assessement to measure differences between image histograms? Elaborate on this, or provide reference for this choice? 
Why not any other distance between histograms? \nWe added citations of the relevant papers on SSMI and Histogram Wasserstein Distances as justification for our choices (Image quality assessment: from error visibility to structural similarity, Wang et al.) (An efficient earth mover's distance algorithm for robust histogram comparison, Ling et al.). \n2. it is also not clear why the full results of the segmentation network are not provided, only average Dice overlap between all classes is provided:\nThe results were presented in its current form to comply to the soft limit of eight pages. We added an appendix (C) to provide dice scores for the individual classes.\n3. the presented validation is probably sufficient for conference paper, but it would be very interesting to see comparison to the papers cited by the authors for stain transfer - (Shaban et al., 2018; Rivenson et al., 2018). \nWe felt that a comparison of cycleGAN or classical machine learning methods for stain transformations was out of the scope of this paper (also considering the soft page limit), as we mainly wanted to focus on the added benefit of stain transformation and its comparison with augmentation for segmentation performance. However, the cited cycleGAN stain transformation articles don\u2019t deviate much from the original cycleGAN paper, so we feel that our comparison to the original comes in as a close second to direct comparison of the different stain cycleGAN approaches. A more elaborate comparison is something we would like to perform in future research. We added a pointer to this possible research direction in our discussion.\n4. the authors did only one way cross validation: AMC to RUMC transformation due to limited training segmentation, so the conclusions should be some-how scaled back to the claims that are sufficiently supported in the paper. 
\nBased on the commentary of all reviewers, we decided to also add the RUMC -> AMC direction, despite the small amount of annotations for training a segmentation network from the AMC dataset. \nSummary of the scores: \n- no augmentation / no stain conversion: 0.46\n- augmentation / no stain conversion: 0.71\n- no augmentation / stain conversion: 0.71\n- augmentation / stain conversion: 0.73\nInterestingly, using only augmentation or only stain conversion leads to approximately the same results. Using both augmentation and stain conversion gives a slight edge, showing that both approaches complement each other to some extent. This is more in line with the expectations we had up front as well. Here we see that the scores overall turn out slightly lower, which could be attributed to the low amount of annotations in the AMC dataset for training the segmentation algorithm. The low amount of annotations may also be the reason for the increased effectiveness of training with augmentations. We addressed the results of these added experiments accordingly in our discussion section (we provide a link to the revised paper at the bottom).\n5. no quantitative results on comparison between tile-by-tile approach and the proposed sliding through whole image approach.\nThis is something we would like to address more extensively in future work.\n6. provide full form of WSI in the text (not only in abstract)\nAdded full form to first mention of whole slides images (WSIs) in text\n7. section 3.1: a gap between \"twenty four\" is missing \nfixed\n8. SSIM is not a metric in the mathematical sense, please clarify. \nChanged the name \u2018metric\u2019 to \u2018performance measure\u2019.\n9. SSIM has also two parameters, provide them. \nAdded a sentence informing on the parameters.\n10. 
the authors use CycleGAN, cycleGAN - make it consistent though the paper\nChanged \u201cCycleGAN\u201d to lowercase c where the word is not start of a sentence\nWe provide a link to the revised paper: \nhttps://drive.google.com/file/d/16mrio1OoiChC9Daq5bVH1aHHyjDGqOyL/view?usp=sharing"}], "comment_replyto": ["BylgEMLnX4", "rklujIbimV", "BJlioA76mV"], "comment_url": ["https://openreview.net/forum?id=BkxJkgSlx4&noteId=H1gljE4sEN", "https://openreview.net/forum?id=BkxJkgSlx4&noteId=B1eSTVEsNE", "https://openreview.net/forum?id=BkxJkgSlx4&noteId=ryxP_NViNE"], "meta_review_cdate": 1551356589494, "meta_review_tcdate": 1551356589494, "meta_review_tmdate": 1551881976449, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "The reviewers agree that it is interesting to see how weakly supervised synthesis can better bridge the gap between datasets / centers, compared with plain intensity augmentation. In my opinion, this is the very interesting point that the authors should emphasize in the abstract (i.e., how Dice goes up from .78 to .85), rather than the (in my opinion)  lesser contributions of the identity loss or (especially) smoothing the transition across tiles. I also missed a comparison with a simple model, e.g., based on optimizing an intensity mapping combining the transforms illustrated in Figure 1, in order to minimize the histogram distance with the source domain. I also encourage the authors to incorportate the reviewers' comments, to further increase the quality of the manuscript.", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=BkxJkgSlx4&noteId=HkxH2GUrUN"], "decision": "Accept"}
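The reviews and responses above repeatedly discuss the paper's evaluation measures: the Dice overlap coefficient for cross-center segmentation (e.g., the reported 0.36 → 0.85 and 0.45 → 0.73 improvements) and the Wasserstein (earth mover's) distance between intensity histograms for judging stain-transformation quality. The sketch below is an illustrative reimplementation of those two measures, not the authors' code; the function names are my own, and it assumes 8-bit grayscale intensities when binning histograms.

```python
import numpy as np
from scipy.stats import wasserstein_distance


def dice_coefficient(pred, target):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / denom


def histogram_wasserstein(img_a, img_b, bins=256):
    """1-D earth mover's distance between the intensity histograms
    of two images (assumed 8-bit, values in [0, 256))."""
    h_a, edges = np.histogram(img_a, bins=bins, range=(0, 256), density=True)
    h_b, _ = np.histogram(img_b, bins=bins, range=(0, 256), density=True)
    centers = (edges[:-1] + edges[1:]) / 2.0
    return wasserstein_distance(centers, centers, u_weights=h_a, v_weights=h_b)
```

A uniform intensity shift of *k* gray levels moves the whole histogram by *k*, so `histogram_wasserstein` returns exactly *k* in that case, which is one reason the earth mover's distance is a natural choice here: unlike bin-wise distances, it accounts for how far mass must move, not just whether bins match.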