{"forum": "Hkx5C9QeeN", "submission_url": "https://openreview.net/forum?id=Hkx5C9QeeN", "submission_content": {"title": "Dense Segmentation in Selected Dimensions: Application to Retinal Optical Coherence Tomography", "authors": ["Bart Liefers", "Cristina Gonzalez-Gonzalo", "Caroline Klaver", "Bram van Ginneken", "Clara I. Sanchez"], "authorids": ["bart.liefers@radboudumc.nl", "cristina.gonzalezgonzalo@radboudumc.nl", "caroline.klaver@radboudumc.nl", "bram.vanginneken@radboudumc.nl", "clara.sanchezgutierrez@radboudumc.nl"], "keywords": ["Segmentation", "Retina", "OCT"], "TL;DR": "We propose a novel convolutional neural network architecture specifically designed for dense segmentation in a subset of the dimensions of the input data.", "abstract": "We present a novel convolutional neural network architecture designed for dense segmentation in a subset of the dimensions of the input data. The architecture takes an N-dimensional image as input, and produces a label for every pixel in M output dimensions, where 0 < M < N. Large context is incorporated by an encoder-decoder structure, while funneling shortcut subnetworks provide precise localization. We demonstrate applicability of the architecture on two problems in retinal optical coherence tomography: segmentation of geographic atrophy and segmentation of retinal layers. Performance is compared against two baseline methods that leave out either the encoder-decoder structure or the shortcut subnetworks. For segmentation of geographic atrophy, an average Dice score of 0.49\u00b10.21 was obtained, compared to 0.46\u00b10.22 and 0.28\u00b10.19 for the baseline methods, respectively. 
For the layer-segmentation task, the proposed architecture achieved a mean absolute error of 1.305\u00b10.547 pixels compared to 1.967\u00b10.841 and 2.166\u00b10.886 for the baseline methods.", "pdf": "/pdf/471b693e5d95caa4d0f5df118e19a4453bad1605.pdf", "code of conduct": "I have read and accept the code of conduct.", "paperhash": "liefers|dense_segmentation_in_selected_dimensions_application_to_retinal_optical_coherence_tomography", "_bibtex": "@inproceedings{liefers:MIDLFull2019a,\ntitle={Dense Segmentation in Selected Dimensions: Application to Retinal Optical Coherence Tomography},\nauthor={Liefers, Bart and Gonzalez-Gonzalo, Cristina and Klaver, Caroline and Ginneken, Bram van and Sanchez, Clara I.},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=Hkx5C9QeeN},\nabstract={We present a novel convolutional neural network architecture designed for dense segmentation  in  a  subset  of  the  dimensions  of  the  input  data.   The  architecture  takes  an N-dimensional image as input, and produces a label for every pixel in M output dimensions, where 0{\\ensuremath{<}} M {\\ensuremath{<}} N.  Large context is incorporated by an encoder-decoder structure, while funneling shortcut subnetworks provide precise localization.  We demonstrate applicability of the architecture on two problems in retinal optical coherence tomography:  segmentation of geographic atrophy and segmentation of retinal layers.  Performance is compared against two baseline methods, that leave out either the encoder-decoder structure or the shortcut subnetworks.  For segmentation of geographic atrophy, an average Dice score of 0.49{\\ensuremath{\\pm}}0.21 was obtained, compared to 0.46{\\ensuremath{\\pm}}0.22 and 0.28{\\ensuremath{\\pm}}0.19 for the baseline methods, respectively. 
For the layer-segmentation task, the proposed architecture achieved a mean absolute error of 1.305{\\ensuremath{\\pm}}0.547 pixels compared to 1.967{\\ensuremath{\\pm}}0.841 and 2.166{\\ensuremath{\\pm}}0.886 for the baseline methods.},\n}"}, "submission_cdate": 1544727250092, "submission_tcdate": 1544727250092, "submission_tmdate": 1561399319998, "submission_ddate": null, "review_id": ["S1er6eKszN", "BylRFzWnQV", "HylCERfcXE"], "review_url": ["https://openreview.net/forum?id=Hkx5C9QeeN&noteId=S1er6eKszN", "https://openreview.net/forum?id=Hkx5C9QeeN&noteId=BylRFzWnQV", "https://openreview.net/forum?id=Hkx5C9QeeN&noteId=HylCERfcXE"], "review_cdate": [1547567293016, 1548649093821, 1548525110207], "review_tcdate": [1547567293016, 1548649093821, 1548525110207], "review_tmdate": [1550002704240, 1548856748673, 1548856734881], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper90/AnonReviewer2"], ["MIDL.io/2019/Conference/Paper90/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper90/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["Hkx5C9QeeN", "Hkx5C9QeeN", "Hkx5C9QeeN"], "review_content": [{"pros": "This study analyzes a novel network architecture for the segmentation of objects with a lower dimension than the input data. In medical images this corresponds to hyperplanes in 3D images or lines from 2D images. Analysis of the network was performed in 2 use-cases: segmentation of retinal layers in OCT B-scans and segmentation of A-scans containing geographic atrophy from 2D bscans.\n\nThe quality, clarity, and originality of this work is good.\n\nThe paper is very well-written and very clear. In particular, Figure 1 is excellent in simultaneously describing three different network architectures in a concise manner; very well done! Figure 2 is also exceptionally done. 
\n\nThe motivation for the work is clear and the results are described in two very relevant applications.\n\nI have not seen a similar network architecture and believe it to be unique. The network architecture is also very relevant to the task at hand, with few arbitrary design decisions. It is unclear why the number of iterations was fixed rather than optimized on the validation set, but I commend including all data necessary to replicate the study.\n\nOverall, the network architecture is novel and the experimental design is mostly well-done. Nice paper.", "cons": "There are some claims made in the paper that are questionable. These claims do not affect the acceptance of the paper, but should be addressed prior to final publishing.\n\n\"Neither classification networks nor segmentation networks are suitable for these tasks [the tasks being segmentation of 1D lines in a 2D image]\". I understand the point that this narrative is trying to deliver, and believe that the narrative should be in there, but as written the text is untrue. Under the narrow definition of classification and segmentation networks provided, this would be true. However, the definition provided misses a wide range of published networks that do not fit the criteria and are able to segment 1D lines from 2D data. In the paper, base model 1 is able to segment 1D lines from 2D images and is very similar to AlexNet. Examples of this have been published in the context of retinal layer segmentation as well: \"Shah et al. Multiple surface segmentation using convolution neural nets: application to retinal layer segmentation in OCT images\".\n\nIn the Results section comparing algorithms for GA segmentation, Dice scores are compared across OCT volumes in the dataset. A single Dice score was calculated per OCT volume. The proposed model had a mean and std dev of 0.49 +/- 0.21 while base model 1 had a mean and std dev of 0.46 +/- 0.22. The sample size, in number of volumes, is 20. 
The next sentence indicates that the proposed model is significantly better with a p-value < 0.01. I do not see how this can be true. I am not a statistics expert, but a comparison of two algorithms on the same OCT volumes could use a paired Student's t-test. Since I do not have the paired data, an unpaired t-test gives a p-value of 0.66. It is possible that each B-scan, or even each A-scan, was used to calculate statistical significance, but that would not be the correct approach, as the data within a single volume would be highly correlated. More information is required on how statistical significance was calculated.", "rating": "4: strong accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature", "oral_presentation": ["Consider for oral presentation"]}, {"pros": "1. The method introduces the idea of funneling subnetworks, which is a novel way to deal with the dense segmentation problem\n2. Experimental results are well validated", "cons": "1. The description of the subnetworks could have been more elaborate, as I did not fully grasp their advantages and architectural nuances", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature", "oral_presentation": ["Consider for oral presentation"]}, {"pros": "The paper proposes a novel CNN architecture for dense segmentation in reduced dimensions with applications to OCT images. The architecture contains a series of downsampling layers with residual connections in an encoder-decoder fashion, with funneling subnetworks providing global and local context for dense segmentation. The authors illustrate the use of the proposed architecture on segmentation of geographic atrophy and retinal layers. The results from the experiments indicate a significant improvement over the baseline methods.\n\nPros:\n1. 
Novel CNN architecture for boundary extraction in OCT images.\n2. The method is evaluated for segmentation (GA) and regression (retinal layers) tasks.\n3. The results from the experiments indicate a 3% and 21% (Dice) performance improvement over the two baseline approaches. The method also shows a similar superior performance for the other application (GA segmentation).\n4. Figure 4 is helpful as it shows how the baseline methods fail to segment retinal layers around the drusen.\n", "cons": "Minor comments:\n\n1. The dataset for the layer-segmentation application contains 115 Normal and 269 AMD samples. The training set includes only 5 Normal samples vs. 159 AMD samples. What is the reason for choosing a training set with this class imbalance?\n2. The model (MSE), trained with very few Normal samples, nevertheless has better performance on Normal samples compared to AMD samples. This performance difference could be highlighted in the discussion section.\n3. It would be good if the authors could clarify the number of parameters for the proposed model and the two baselines.", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct", "special_issue": ["Special Issue Recommendation"]}], "comment_id": ["H1gmDRnY4E", "BJlUMxTY4N", "Ske1L7TFVE"], "comment_cdate": [1549549147098, 1549549582172, 1549550406970], "comment_tcdate": [1549549147098, 1549549582172, 1549550406970], "comment_tmdate": [1555946021135, 1555946020923, 1555946020705], "comment_readers": [["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper90/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper90/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper90/Authors", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "No Title", "comment": "Thank you for your feedback. 
Unfortunately, we were restricted by the page limit and could not provide a more elaborate description of the network architecture. We will, however, consider a more in-depth discussion of this topic, possibly with the help of additional figure(s), in follow-up work. Furthermore, we hope to be able to explain the architectural nuances more clearly during the conference."}, {"title": "No Title", "comment": "Thank you for your feedback. \nThe separation of the original data set into training/validation and test was done in such a way that an equal number of AMD and normal cases would be present in the test set (this was mirrored in the validation set). This seemed to be the best choice for a performance comparison between the two subsets. The reason for including such a low number of normal cases in the training set is that we suspect that adding more normal cases would provide little benefit. Note that even in AMD cases there may be parts of the retina where the normal structure is maintained, so a model that is able to solve the more complicated task of segmenting layers in AMD cases is likely to also yield good performance in the absence of abnormalities. This also explains why the models perform better on the normal cases than on the AMD cases.\n\nThe number of trainable parameters in the different models is: 8,144,067 for base1, 10,943,651 for base2 and 35,744,131 for the proposed model."}, {"title": "No Title", "comment": "Thank you for your feedback.\n\nWe agree that the chosen formulation in the introduction was inapt. The intention was indeed to highlight the limitations of out-of-the-box application of well-known architectures to the problem at hand. We will rephrase this part.\n\nRegarding the statistics in the results section: it may indeed seem confusing or counter-intuitive that, with such a high standard deviation, the p-value turns out so low. 
The reason for this is that, even though there are large differences in dice scores between volumes, for an individual volume, the model actually quite consistently performs slightly better than the base models. The p-value is indeed obtained using a paired t-test (this will be added to the document for clarity). For reference, all dice scores for the different models in the test set will be given here. There were actually 3 cases with no reference GA, so here no dice score was computed and they were left out. These are the dice scores for the remaining 17 cases:\n\ndice scores\n #  model  base1  base2  best\n------------------------------\n 1  0.77   0.74   0.73   model\n 2  0.37   0.28   0.00   model\n 3  0.67   0.65   0.60   model\n 4  0.53   0.49   0.01   model \n 5  0.63   0.69   0.00   base1\n 6  0.43   0.42   0.04   model \n 7  0.82   0.79   0.62   model\n 8  0.39   0.33   0.53   base2\n 9  0.34   0.32   0.36   base2 \n10  0.57   0.53   0.48   model \n11  0.25   0.23   0.19   model\n12  0.73   0.72   0.59   model \n13  0.73   0.58   0.48   model\n14  0.28   0.25   0.01   model\n15  0.00   0.00   0.00   base1\n16  0.37   0.26   0.09   model\n17  0.53   0.34   0.00   model\n\n      mean \u00b1 std\n------------------\nmodel  0.49 \u00b1 0.21\nbase1  0.45 \u00b1 0.22\nbase2  0.28 \u00b1 0.27\n\nAnd these are the resulting p-values:\n\nUsing: scipy.stats.ttest_rel\n\n    Calculate the T-test on TWO RELATED samples of scores, a and b.\n\n    This is a two-sided test for the null hypothesis that 2 related or\n    repeated samples have identical average (expected) values.\n\n    Parameters\n    ----------\n    a, b : array_like\n        The arrays must have the same shape\n\n                 p-value\n------------------------\nmodel <> base1   0.00574\nmodel <> base2   0.00083\n"}], "comment_replyto": ["BylRFzWnQV", "HylCERfcXE", "S1er6eKszN"], "comment_url": ["https://openreview.net/forum?id=Hkx5C9QeeN&noteId=H1gmDRnY4E", 
"https://openreview.net/forum?id=Hkx5C9QeeN&noteId=BJlUMxTY4N", "https://openreview.net/forum?id=Hkx5C9QeeN&noteId=Ske1L7TFVE"], "meta_review_cdate": 1551356572181, "meta_review_tcdate": 1551356572181, "meta_review_tmdate": 1551881973646, "meta_review_ddate": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "The work addresses a gap in the state of the art for 2D->1D semantic segmentation problems, and more generally for scenarios where the output dimension is smaller than the input one. There is clear applicability of this approach outside the domain of retinal OCT, and it is thus expected to be of interest to the MIDL community. The reviewers and AC agree that the work and the proposed network are original and the experiments of good quality.", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=Hkx5C9QeeN&noteId=rJeNjMLr8V"], "decision": "Accept"}
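
The paired t-test in the final author response can be checked directly against the per-volume Dice scores listed there. The authors report using scipy.stats.ttest_rel; the sketch below instead recomputes the t statistic for model vs. base1 by hand in pure Python (no scipy dependency) and compares it against the two-sided critical value of t(16), about 2.921 at alpha = 0.01, so a larger t implies p < 0.01.

```python
import math

# Per-volume Dice scores for the 17 test cases with reference GA,
# as listed in the author response (model vs. base1).
model = [0.77, 0.37, 0.67, 0.53, 0.63, 0.43, 0.82, 0.39, 0.34,
         0.57, 0.25, 0.73, 0.73, 0.28, 0.00, 0.37, 0.53]
base1 = [0.74, 0.28, 0.65, 0.49, 0.69, 0.42, 0.79, 0.33, 0.32,
         0.53, 0.23, 0.72, 0.58, 0.25, 0.00, 0.26, 0.34]

# Paired t-test: test whether the per-volume differences have zero mean.
diffs = [m - b for m, b in zip(model, base1)]
n = len(diffs)
mean = sum(diffs) / n
var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
t = mean / math.sqrt(var / n)                        # t statistic, df = n - 1

print(f"t = {t:.3f} on {n - 1} degrees of freedom")
# t comes out near 3.22, above the two-sided critical value 2.921 for
# t(16) at alpha = 0.01, consistent with the reported p = 0.00574.
print("significant at p < 0.01:", t > 2.921)
```

This illustrates the point made in the response: although the between-volume spread of Dice scores is large (hence the big standard deviations), the per-volume differences are small and almost uniformly positive, which is exactly what a paired test measures.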