AMSR/conferences_raw/midl19/MIDL.io_2019_Conference_BkeZySSxlE.json
{"forum": "BkeZySSxlE", "submission_url": "https://openreview.net/forum?id=BkeZySSxlE", "submission_content": {"title": "CNN-based segmentation with a semi-supervised approach for automatic cortical sulci recognition", "authors": ["L\u00e9onie Borne", "Jean-Fran\u00e7ois Mangin", "Denis Rivi\u00e8re"], "authorids": ["leonie.borne@cea.fr", "jean-francois.mangin@cea.fr", "denis.riviere@cea.fr"], "keywords": ["CNN", "segmentation", "semi-supervision", "cortical sulci"], "abstract": "Despite the impressive results of deep learning models in computer vision, these techniques have difficulty achieving such high performance in medical imaging. Indeed, two challenges are inherent in this domain: the rarity of labelled images, while deep learning methods are known to be extremely data intensive, and the large size of images, generally in 3D, which considerably increases the need for computing power. To overcome these two challenges, we choose to use a simple CNN that tries to classify the central voxel of a 3D patch given to it as an input, while exploiting a large unlabelled database for pretraining. Thus, the use of patches limits the size of the neural network and the introduction of unlabelled images increases the amount of data used to feed the network. This semi-supervised approach is applied to the recognition of the cortical sulci: this problem is particularly challenging because it contains as many structures to be recognized as labelled subjects, i.e. only about sixty, and these structures are extremely variable. The results show a significant improvement compared to the BrainVISA model, the most used sulcus recognition toolbox.", "pdf": "/pdf/ab99b9fc12347f45f67cbbd411d32cb2603e50c2.pdf", "code of conduct": "I have read and accept the code of conduct.", "paperhash": "borne|cnnbased_segmentation_with_a_semisupervised_approach_for_automatic_cortical_sulci_recognition"}, "submission_cdate": 1544733913084, "submission_tcdate": 1544733913084, "submission_tmdate": 1545069833353, "submission_ddate": null, "review_id": ["S1gIl-i2mN", "HyxEPpj2X4", "B1g-bZC27V", "ryeugtChmE"], "review_url": ["https://openreview.net/forum?id=BkeZySSxlE&noteId=S1gIl-i2mN", "https://openreview.net/forum?id=BkeZySSxlE&noteId=HyxEPpj2X4", "https://openreview.net/forum?id=BkeZySSxlE&noteId=B1g-bZC27V", "https://openreview.net/forum?id=BkeZySSxlE&noteId=ryeugtChmE"], "review_cdate": [1548689645548, 1548692827733, 1548701945232, 1548703984272], "review_tcdate": [1548689645548, 1548692827733, 1548701945232, 1548703984272], "review_tmdate": [1548856701828, 1548856700334, 1548856696814, 1548856695435], "review_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper101/AnonReviewer2"], ["MIDL.io/2019/Conference/Paper101/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper101/AnonReviewer4"], ["MIDL.io/2019/Conference/Paper101/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["BkeZySSxlE", "BkeZySSxlE", "BkeZySSxlE", "BkeZySSxlE"], "review_content": [{"pros": "They use deep learning on a new dataset and suggest a method to make use of unlabelled data.", "cons": "\nI do not think the contribution is strong enough for a paper to be accepted at the conference. They use an old method (classify the centre pixel of a patch in the image) with the idea that it will use less memory and expand the data available, since there will be as many samples as available voxels. 
However, this is unlikely to be the case. A more modern approach like a 3D U-Net (\u00c7i\u00e7ek et al. 2016) should fit on a modern consumer-grade GPU like a 1080Ti. The \"extra\" data will be highly redundant since neighboring pixels are basically the same patch, and therefore a lot of the operations during training will be redundant. It would be better to do data augmentation (e.g. random shifts and rotations). If they are concerned about sample imbalance they could just weight the loss for each pixel according to the sample weight. They never describe their model properly. Is it exactly a LeNet network? What backend did they use to train it? Why didn't they use batch normalization? Supplementary material with more information would be needed.\n\nThe ground truth confuses me. They compare with the BrainVISA model, but at the same time they use this model to extract candidate regions. If that is the case, they are rather using a sort of ensemble of models. It would be impossible for the deep learning model to do worse than BrainVISA since the pixels are first classified by the latter.\n\nI find problems in the statistics they use to claim an improvement in their model. They mention a p-value of 2.15e\u221226. With the numbers shown in table 1, I would assume the only way to get those p-values is by having thousands of independent samples. Since their total labelled sample only consists of ~60 individuals, I do not understand those statistics. Are they considering each individual sulcus in each subject as a different sample? Are they doing multiple hypothesis correction? Since their model only considers positive or unknown labels (not the sulcus identity), I think the correct approach would be to pool the results from all the voxels. If they want a standard deviation they should have used cross-validation.\n\nAdditionally, the English and presentation need to be polished. The text is at times informal and unclear. The bold highlighting at times does not indicate relevant information. Figure 2 seems like it might be important, but it is not clearly stated why. There are titles that are abbreviations, and those abbreviations are never explained (e.g. ESI).", "rating": "1: strong reject", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "A semi-supervised approach was proposed for training a convolutional neural network for automatic cortical sulci segmentation. The benefit of pretraining and regularization was shown. ", "cons": "The training and evaluation approach is unclear. First, it is stated that a leave-one-subject-out cross-validation scheme is used. Then a propagation of the training ground-truth labels to 10 other brains via a Voronoi diagram is mentioned for measuring error rates. The propagation method itself is insufficiently described. What errors are introduced by this propagation? And why is this propagation needed for evaluation instead of evaluating the performance on the manual labels from the test data?\n\nThe method was only compared to the BrainVISA model, which is suboptimal, while stating in the conclusion \"shows the power of the CNNs compared to the methods developed so far\". The authors have previously proposed two other methods (Borne et al., 2018, Perrot et al., 2011), which performed better than BrainVISA. As the dataset was changed, performance cannot directly be compared with these previous methods. 
For the reader to know the benefit of the currently proposed method, the authors should include a comparison on the same dataset with the state-of-the-art method (the better of Borne et al., 2018 and Perrot et al., 2011).\n\nAll used parameters (e.g. number of neighbours, BrainVISA configuration) need to be included to allow reproduction of the method.\n\nThe evaluated method names should be stated in the text of 4.1 and in the methods section to ensure readers can follow what is compared. Why are the p-values when comparing BrainVISA to CNN+pretrain+reg (3% difference) larger than or similar to those when comparing BrainVISA to CNN (1% difference)? Also, the p-values are very small for a difference of 1% and 62 test subjects.\n", "rating": "2: reject", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "(this was done as an emergency review, and won't be as detailed as it could be)\n\nThe paper is about cortical sulci segmentation, performed with CNNs. The problem and the specifics of this task are well explained and motivated.\n\nThe method is performed in several steps. First, a neural network is trained on annotated data (62 patients). Then, inference is run on 500 unannotated patients. Those predictions are then used to train a new network, which is then fine-tuned on the original patients. Since BrainVISA is extensively used either to select the voxels to label or to regularize the results, the process cannot be called end-to-end.\n\nFor performance reasons, segmentation is not performed on the whole 3D volume, but on a list of voxels (with their neighboring patches) selected by BrainVISA. This divides the number of voxels to classify by 1000. The authors then use a LeNet modified for 3D to classify each voxel. ", "cons": "\u00abDespite the impressive results of deep learning models in computer vision, these techniques have difficulty achieving such high performance in medical imaging. \u00bb\nThis is a really bold statement to start a paper, one which is objectively wrong. This might indicate a lack of awareness of the state of the art by the authors. If you refer only to this specific task, this should be updated to reflect that. \n\nI am concerned about the use of BrainVISA to select which voxels should be classified, as it introduces the bias of this imperfect tool into the training process. On top of that, even a trained network will need it as a pre-processing step to perform inference, which is not ideal.\nI am not even convinced this is really needed, especially with such a lightweight network; GPUs have made great progress in recent years in memory/parallel capabilities. Training time with only 62 patients is usually not really a concern; we are not dealing with the millions of images found in natural image datasets. I would like to see a baseline of an end-to-end trained 3D-CNN, and then compare your method to it.\n\nIt is mentioned that at each epoch, only 100 points are randomly selected per subject for training. Why? Why not use all the data available? Is this some weird kind of data augmentation?\n\nCross-entropy is actually not a great loss function for unbalanced tasks, at least in its unweighted version. There is also other work on specific losses for unbalanced tasks, such as:\n- Sudre, Carole H., et al. \"Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations.\" Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Springer, Cham, 2017. 
240-248.\n- Milletari, Fausto, Nassir Navab, and Seyed-Ahmad Ahmadi. \"V-Net: Fully convolutional neural networks for volumetric medical image segmentation.\" 3D Vision (3DV), 2016 Fourth International Conference on. IEEE, 2016.\nV-Net could actually be a good baseline for this paper. \n\nThe strategy used for the semi-supervision is usually referred to as proposals. In this case, the proposals are refined using BrainVISA. Some related works that might be interesting to acknowledge and maybe compare to:\n- Rajchl, Martin, et al. \"DeepCut: Object segmentation from bounding box annotations using convolutional neural networks.\" IEEE Transactions on Medical Imaging 36.2 (2017): 674-683.\n- Papandreou, George, et al. \"Weakly- and semi-supervised learning of a deep convolutional network for semantic image segmentation.\" Proceedings of the IEEE International Conference on Computer Vision. 2015.", "rating": "2: reject", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "This paper proposes a method to use a large amount of unlabeled data for cortical sulci recognition.", "cons": "1. It is not so clear whether the authors use the same architecture for the first pre-training network and for the second fine-tuning network. If not, it should be clarified.\n2. In 2.2.3, the pre-training model is trained for only 15 epochs. Is that enough? Had this model already converged? Also, does \u201cpoints\u201d mean patches or voxels? It\u2019s not clear.\n3. In 3.1., the authors mentioned that four additional sulci were used compared to the previous paper. In the previous paper, 63 and 62 sulci were used for left and right, respectively. In this paper, 64 and 63 sulci were used for left and right, respectively. How does this become four additional sulci?\n4. In 4.1, the p-values are strange. It seems that the final model shows the best results, but why is its p-value higher than the ones for the other methods?\n5. In figure 2, what does the blue bar mean? According to the caption, there should be only violet and pink bars.\n\nMinor comments:\n1. Voxel resolution should be mm^3, not mm.\n2. Please represent the measures Elocal and ESI with subscripts, such as E_{local} and E_{SI}.\n", "rating": "2: reject", "confidence": "1: The reviewer’s evaluation is an educated guess"}], "comment_id": ["HJxULSGsNV", "HklqYrGj4N", "B1et3rGs4E", "HygDAHMi4E"], "comment_cdate": [1549636941551, 1549636994073, 1549637040913, 1549637070650], "comment_tcdate": [1549636941551, 1549636994073, 1549637040913, 1549637070650], "comment_tmdate": [1555946010010, 1555946009789, 1555946009534, 1555946009293], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper101/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper101/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper101/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper101/Authors", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "AnonReviewer1", "comment": "We thank the reviewer for raising several points that were not presented clearly enough in this study. Here are some clarifications. \n1. The same architecture was used for the first pre-training and the second fine-tuning.\n2. During our preliminary experiments, 15 epochs seemed to us to be more than enough to converge, so we used that for this study. 
In the long run, however, we want to set up an early stopping strategy to limit unnecessary computations.\n\"Points\" means \"voxels\".\n3. In (Perrot et al. 2011), the ventricles of the right and left hemispheres are counted as two sulci whereas, anatomically speaking, these structures are not sulci. In this study we have chosen not to take them into account, as explained on p.6. Thus, compared to (Perrot et al. 2011), we have 63 + 62 sulci - 2 ventricles + 4 new sulci, that is 64 + 63 sulci.\n4. Using the example of ESImean, the p-values presented compare CNN and CNN+pretrain (about 1e-26), CNN+pretrain and CNN+pretrain+reg (about 1e-75), and BrainVISA and CNN+pretrain+reg (about 1e-14). If we also calculate the p-values for BrainVISA and CNN (p-value greater than 0.05) and for BrainVISA and CNN+pretrain (about 1e-7), then the p-value is actually better for CNN+pretrain+reg than for the other models.\n5. The bar perceived as blue must be the violet bar, because there is in fact no blue bar. We can change the colors of this figure to make the legend clearer."}, {"title": "AnonReviewer4", "comment": "First of all, we apologize for our clumsy statements/explanations and we thank the reviewer for his comments and references, which allow us to take a step back from this study.\n\n1. \u00abDespite the impressive results of deep learning models in computer vision, these techniques have difficulty achieving such high performance in medical imaging. \u00bb This first sentence is indeed clumsy and does not reflect our intention. What we want to highlight in this article is the success of applying a simple neural network to this specific application.\n2. Concerning the use of BrainVISA to extract the voxels to be labelled, this is first of all essential for the manual labelling of brains: indeed, without the 3D visualization of the brain and its sulci, it is almost impossible to label without seeing the relative positions of the sulci, their depth, their length, etc. Secondly, the representation obtained thanks to BrainVISA preprocessing makes it easy to calculate several measurements (length, depth, etc.) that are widely used in morphometric studies. Thirdly, the BrainVISA pipeline is mainly open to criticism with regard to the cutting of the sulci skeleton into elementary folds (which are not used in post-processing here, unlike in the previous methods), while the extraction of the skeleton is robust. Finally, the use of this pipeline simplifies the work of the neural network, which can focus on sulci labelling without also having to learn how to segment them.\nHowever, the justification for the use of the pipeline should indeed include, as a baseline, an end-to-end network managing both segmentation and labelling.\n3. Concerning the advances in GPU capabilities, we only have 62 subjects, but the images are very large, and since each voxel is actually an example to classify, considering all the voxels of a single image (without going through the BrainVISA pipeline) already gives well over a million examples. One of the solutions to move beyond this patch approach would indeed be to use a 3D-CNN such as V-Net or 3D-UNet, and we are indeed testing these methods, but we did not yet have the results when we wrote this study. For information, the 3D-UNet currently approaches the performance of the method presented here, but does not exceed it.\n4. 
Concerning the 100-1000 points selected, this number of points was initially chosen to accelerate learning because of the limited capacity of our computers. In addition, due to the redundancy of data associated with this patch approach, this did not seem to have a particular influence on the results.\n5. Regarding the unbalanced task, we thank you very much for the references provided; we will try to take them into account later. For this application, however, the largest sulci are also the most used in morphological studies and therefore need to be the best recognized, which is why we did not address the problem here.\n6. Finally, for the semi-supervised strategy, we also thank you for the references provided; we will also try to enrich our method with these approaches."}, {"title": "AnonReviewer3", "comment": "We apologize to the reviewer for our lack of clarity on the points raised. We will try to clarify these misunderstandings in this response.\n\n1. Concerning the propagation step, we should have insisted on the fact that the training ground truth is not propagated to 10 new brains but to 10 segmentations of the same brain: the same image was used for the manually labelled segmentation and for the other 10 segmentations. The error measurement on these 10 segmentations allows us to take into account the variability of the segmentation pipeline (in particular the cutting into elementary folds) and therefore the robustness of the CNN to this variability, which is not possible based only on the manually labelled segmentation.\n2. Regarding the comparison with the BrainVISA model, the BrainVISA model corresponds to the model described in (Perrot et al., 2011); we should have been clearer on this point. However, it is true that the model was not compared to the model described in (Borne et al., 2018), which is slightly but significantly better than (Perrot et al., 2011). Indeed, (Borne et al., 2018) has not been applied to the new database at this time. However, what we wanted to highlight in this article is the robustness of the CNN to errors in the cutting into elementary folds thanks to voxel-level labelling, which was not mentioned either in (Borne et al., 2018) or in (Perrot et al., 2011).\n3. Concerning the values of the parameters used, we should indeed have been more exhaustive to allow the method to be reproduced; we apologize on this point. The same applies to the names of the methods used.\n4. Regarding p-values, we should have been more precise when listing them: the p-values presented compare CNN and CNN+pretrain (about 1e-26), CNN+pretrain and CNN+pretrain+reg (about 1e-75), and BrainVISA and CNN+pretrain+reg (about 1e-14). If we also calculate the p-values for BrainVISA and CNN (p-value greater than 0.05) and for BrainVISA and CNN+pretrain (about 1e-7), then the p-value is actually better for CNN+pretrain+reg than for the other models.\nThe small values of these p-values also seemed strange to us in view of the low percentage of improvement; we assume that this is probably due to the fact that the test is paired and that each subject is slightly improved compared to the previous method."}, {"title": "AnonReviewer2", "comment": "We apologize to the reviewer for the lack of clarity on some points. We'll try to make up for it here.\n\n1. 
Concerning the reproaches made about the age of this approach, we would like to point out that this study was a first attempt to use CNNs, intended to be easy to implement and train, and it showed results exceeding all expectations in view of the problem at hand. Of course we are considering using more modern models, as mentioned at the end of the paper, and we are currently testing them.\n2. For the data augmentation, as rotations and random shifts seem to us to be too strong transformations, we are instead planning to add noise to the initial image to obtain variable segmentations, then label them as was done for the error calculation, in order to train the model on these data.\n3. The model used is not exactly LeNet since it has been adapted for 3D image processing, but it does have the same number of hidden layers, the same kernel sizes, etc. In order to keep the article short, we chose not to dwell on it. The backend used is the PyTorch library. We will consider using batch normalization later on.\n4. Concerning the preprocessing performed by BrainVISA, we were indeed not clear on this point: preprocessing makes it possible to extract the sulci skeleton, but once this segmentation has been performed, the real challenge lies in the labelling of the skeleton's voxels. BrainVISA proposes an algorithm for this (Perrot et al., 2011) and we propose another one here.\n5. Concerning the p-values, we were also surprised that they were so low, but we have difficulty understanding the reproaches made. Here are some clarifications: for each method, scores are calculated for both hemispheres of each subject, so we have ~120 scores following the leave-one-subject-out scheme. The two sets of ~120 scores are then compared using the Python function scipy.stats.ttest_rel, which calculates the t-test on two related samples of scores. If this calculation is wrong, we are sorry, but we cannot otherwise justify the values obtained."}], "comment_replyto": ["ryeugtChmE", "B1g-bZC27V", "HyxEPpj2X4", "S1gIl-i2mN"], "comment_url": ["https://openreview.net/forum?id=BkeZySSxlE&noteId=HJxULSGsNV", "https://openreview.net/forum?id=BkeZySSxlE&noteId=HklqYrGj4N", "https://openreview.net/forum?id=BkeZySSxlE&noteId=B1et3rGs4E", "https://openreview.net/forum?id=BkeZySSxlE&noteId=HygDAHMi4E"], "meta_review_cdate": 1551356550879, "meta_review_tcdate": 1551356550879, "meta_review_tmdate": 1551703102863, "meta_review_ddate": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "Paper 101 - Rejection, due to issues with clarity and lack of methodological novelty\n\nThis paper proposes semi-supervised sulci classification. \n\n* R#1 has mostly minor technical questions on CNNs - \"an educated guess\"\n\nThe authors correctly address these minor concerns.\n\n* R#4 has concerns about awareness of the recent state of the art in CNN use in medical imaging, the practical value of the method, and the evaluation framework\n\nThe authors acknowledge \"clumsiness\" and mention that the performance of comparable 3D-CNN versions appears equivalent to that of the proposed approach. \n\nThe authors genuinely thank R#4 for the provided references. \n\n* R#3 raises issues with the training and evaluation setups (comparative framework)\n\nThe authors acknowledge the lack of clarity in the comparative study.\n\n* R#2 questions the methodological novelty, has technical questions, and raises concerns about the evaluation setup with BrainVISA.\n\nThe authors acknowledge the lack of clarity, but fail to address novelty and to explain their low statistical scores. 
\n\nConclusion:\n\nThe reviewers' recommendations are reject-reject-strong reject, mostly due to clarity and lack of methodological novelty. All but one are confident in their decisions.\n\nGlobal recommendation towards Rejection\n\n\n", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=BkeZySSxlE&noteId=r1x1qMLSU4"], "decision": "Reject"}
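
Editor's note on the recurring p-value dispute (raised in several reviews and answered in the rebuttals): scipy.stats.ttest_rel performs a paired t-test, and with a paired design a small but consistent per-subject improvement can produce extremely small p-values even with only ~120 hemisphere scores. A minimal illustrative sketch follows; the sample size matches the authors' description, but all score values are synthetic assumptions, not the paper's data.

import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n = 124  # ~62 subjects x 2 hemispheres, per the authors' reply

# Hypothetical error scores (in %) for two methods on the same subjects;
# the second method is consistently ~1 point better per subject.
baseline = rng.normal(20.0, 5.0, n)
improved = baseline - 1.0 + rng.normal(0.0, 0.5, n)

# Paired (related-samples) t-test, the same SciPy function the authors cite.
t, p = ttest_rel(baseline, improved)
print(f"t = {t:.1f}, p = {p:.2e}")
# p comes out astronomically small because the per-subject differences are
# highly consistent, even though the mean gap is only ~1 point on a ~20-point
# score, which matches the authors' explanation of their results.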