{"forum": "B_NG9y_wqU", "submission_url": "https://openreview.net/forum?id=73rByVsMJu", "submission_content": {"authorids": ["chenhantsai@mail.tau.ac.il", "nk@eng.tau.ac.il", "eli.konen@sheba.health.gov.il", "iris.eshed@sheba.health.gov.il", "arnaldo.mayer@sheba.health.gov.il"], "abstract": "Magnetic Resonance Imaging (MRI) is a widely-accepted imaging technique for knee injury analysis. Its advantage of capturing knee structure in three dimensions makes it the ideal tool for radiologists to locate potential tears in the knee. In order to better confront the ever growing workload of musculoskeletal (MSK) radiologists, automated tools for patients' triage are becoming a real need, reducing delays in the reading of pathological cases. In this work, we present the Efficiently-Layered Network (ELNet), a convolutional neural network (CNN) architecture optimized for the task of initial knee MRI diagnosis for triage. Unlike past approaches, we train ELNet from scratch instead of using a transfer-learning approach. The proposed method is validated quantitatively and qualitatively, and compares favorably against state-of-the-art MRNet while using a single imaging stack (axial or coronal) as input. Additionally, we demonstrate our model's capability to locate tears in the knee despite the absence of localization information during training. Lastly, the proposed model is extremely lightweight ($<$ 1MB) and therefore easy to train and deploy in real clinical settings.", "paper_type": "both", "TL;DR": "We present the Efficiently-Layered Network (ELNet), a convolutional neural network (CNN) architecture optimized for the task of initial knee MRI diagnosis for triage.", "authors": ["Chen-Han Tsai", "Nahum Kiryati", "Eli Konen", "Iris Eshed", "Arnaldo Mayer"], "track": "full conference paper", "keywords": ["Knee Diagnosis", "MRI", "Deep Learning", "ACL Tear", "Meniscus Tear", "Knee Injury", "Medical Triage"], "title": "Knee Injury Detection using MRI with Efficiently-Layered Network (ELNet)", "paperhash": "tsai|knee_injury_detection_using_mri_with_efficientlylayered_network_elnet", "pdf": "/pdf/72f170d7cf4cfc138a391576f383b8e1ca75f5b9.pdf", "_bibtex": "@inproceedings{\ntsai2020knee,\ntitle={Knee Injury Detection using {\\{}MRI{\\}} with Efficiently-Layered Network ({\\{}ELN{\\}}et)},\nauthor={Chen-Han Tsai and Nahum Kiryati and Eli Konen and Iris Eshed and Arnaldo Mayer},\nbooktitle={Medical Imaging with Deep Learning},\nyear={2020},\nurl={https://openreview.net/forum?id=73rByVsMJu}\n}"}, "submission_cdate": 1579955681062, "submission_tcdate": 1579955681062, "submission_tmdate": 1588418659612, "submission_ddate": null, "review_id": ["5rdTfmciFg", "5JZxmvW70a", "fkvO-8tfNA"], "review_url": ["https://openreview.net/forum?id=73rByVsMJu&noteId=5rdTfmciFg", "https://openreview.net/forum?id=73rByVsMJu&noteId=5JZxmvW70a", "https://openreview.net/forum?id=73rByVsMJu&noteId=fkvO-8tfNA"], "review_cdate": [1584140447804, 1584134664759, 1584124230502], "review_tcdate": [1584140447804, 1584134664759, 1584124230502], "review_tmdate": [1585229841135, 1585229840353, 1585229839552], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2020/Conference/Paper117/AnonReviewer1"], ["MIDL.io/2020/Conference/Paper117/AnonReviewer2"], ["MIDL.io/2020/Conference/Paper117/AnonReviewer4"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B_NG9y_wqU", "B_NG9y_wqU", "B_NG9y_wqU"], "review_content": [{"title": "Knee Injury Detection using MRI with 
Efficiently-Layered Network (ELNet)", "paper_type": "validation/application paper", "summary": "The authors propose a lightweight CNN model (< 1 MB) for locating potential tears in the knee on MRI images. The main contributions are two normalization layers (layer and contrast normalization) for 3D sub-images and the application of BlurPool downsampling. Promising results are shown on two knee datasets.", "strengths": "The paper is well written and easy to follow.\n\nEven though the proposed model is lightweight (0.2M), it is shown to be on par or better than a recently published model called MRNet (183M parameters).\n\nThe selected application (discovering knee tear) seems to be clinically relevant.\n\n\n", "weaknesses": "It is not entirely clear how crucial the proposed multi-slice normalization and BlurPool layers are. An ablation study and comparison to established methods like batch normalization would have been valuable.", "questions_to_address_in_the_rebuttal": "I appreciate that an experienced board-certified MSK radiologist was asked to identify the most informative slice. How was \"most informative\" defined? What is the performance (quantitatively) of ELNet on finding this slice?\n\nWhat is the intution/formal definition of MCC and why is it preferred compared to the ROC-AUC measure?\n\nWhy was the normalization performed in the slice direction only? Could it be of value in other spatial directions (e.g., in plane)?\n\nWhich statistical tests were applied to show statistical significance for the numbers in bold of Table 2?\n\nWill the source code be made publicly available upon publication?\n", "detailed_comments": "The sub-figures in Figure 2 could be explained in more detail. What do the different colors and arrows mean for instance?", "rating": "3: Weak accept", "justification_of_rating": "The method adopts approaches from the literature (instance normalization, BlurPool) and applies them to the problem of knee tear detection on 3D MRI data. \n\nThe paper is well written but would benefit from an ablation study to better understand the value of the individual layers in comparison to the standard approach using batch normalization. \n\nThe results and model size of the proposed approach are enticing.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct", "recommendation": ["Poster"], "special_issue": "no"}, {"title": "A Model for Diagnosing Knee Pathologies ", "paper_type": "both", "summary": "In this work, the authors purposed a new deep neural network architecture for detecting injuries/abnormalities in the knee. The main contribution of the work was adding a normalization step to the network, and learning the affine transformation parameters during the training. The normalization was followed by a BlurPool layer to solve the shift variance. ", "strengths": "The paper is written very well, the implementation details are provided to help reproducing the results. \nThe method was tested on two different datasets, which is impressive. The results of the model was compared also to the state of the art.", "weaknesses": "From the following sentence, I understand that for each pathology, a different model was trained. If this is true, the model is not efficient. 
\n\u201cContrast normalization yielded the best results for detecting meniscus tears, and layer normalization for detecting the remaining pathologies.\u201d", "questions_to_address_in_the_rebuttal": "Please provide quantification for this sentence:\n\u201cIn the majority of the cases provided, ELNet was able to indicate the most informative slice and has generated a heatmap that coincided with the region reported by the radiologist.\u201d\n\nWhat was the reason for choosing different values for K for the two datasets? Were these values obtained using cross-validation? ", "detailed_comments": "This sentence is not completely true: \u201ca model with higher sensitivity is always preferred since better detection of true positives has always been the goal of automated diagnosis algorithms.\u201d \nA good model should have high sensitivity as well as high specificity.\n", "rating": "3: Weak accept", "justification_of_rating": "The algorithm was explained very well. The results are also very nice. However, if different models were trained for predicting each parameter, not only training but also prediction would not be efficient.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct", "recommendation": ["Oral"], "special_issue": "yes"}, {"title": "Method is well explained and the experiment for comparing with state-of-the-art is complete, but ablation study lacks.", "paper_type": "validation/application paper", "summary": "The paper proposes an interesting method (ELNet) to diagnose anterior cruciate ligament tears in knee MRI using multi-slice normalization and BlurPool. The cross-validation experiments on 2 different datasets show good improvement over the previous state-of-the-art method. A hyper-parameter search was performed to obtain the best proposed model, but an ablation study for multi-slice normalization and BlurPool is lacking.", "strengths": "The paper is well written and describes an interesting and relatively novel approach to diagnosing knee diseases.\nThe methods are well explained and the results are well compared to the previous state-of-the-art.\nParameters are well searched to get the highest performance.\nThe key contributions are:\nDifferent normalization methods are used for different diseases to boost the performance. \nBlurPool is used for the network.\nThe proposed network achieves higher AUC and MCC.\n", "weaknesses": "1. The purpose of BlurPool and how it improves the model is unclear. BlurPool is a pre-defined 3x3 kernel followed by strided down-sampling, which may already have been used in the backbone. An ablation study is lacking.\n2. The proposed network applies different normalization for different diseases, but no results support the claim that for some diseases, one kind of normalization is better than the other.\n3. Lack of novelty. BlurPool is the only novelty of this paper, and layer-normalization/contrast-normalization act more as a normalization search for different diseases.", "questions_to_address_in_the_rebuttal": "1. The purpose of BlurPool. Maybe I missed it, but I didn't see an explanation of the purpose of BlurPool. \n3. Lack of novelty. This is mainly because multi-slice normalization uses 2 existing normalization methods, and it is trivial how the normalization method for each disease is chosen. ", "rating": "2: Weak reject", "justification_of_rating": "The results show good improvement for knee diseases in MRI. Two different datasets are evaluated. Hyper-parameters are well searched, but more importantly, the paper lacks an ablation study for the two proposed novelties. 
Also, it is not convincing to consider multi-slice normalization a novelty.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct", "recommendation": [], "special_issue": "no"}], "comment_id": ["5_3tN2yBmvf", "aP07o7Ak03-", "w9A7mkSnR6s"], "comment_cdate": [1585059973786, 1585060103789, 1585060207800], "comment_tcdate": [1585059973786, 1585060103789, 1585060207800], "comment_tmdate": [1585229841713, 1585229840869, 1585229840081], "comment_readers": [["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2020/Conference/Paper117/Authors", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper117/Authors", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper117/Authors", "MIDL.io/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Re: Knee Injury Detection using MRI with Efficiently-Layered Network (ELNet) ", "comment": "We would like to thank the reviewer for a detailed review of our work.\n\nAs the reviewer suggested, we will add an ablation study to our final revision that investigates the effect of Batch Normalization compared to our proposed multi-slice normalizations and the effect of Max-Pool compared with BlurPool (both jointly and independently) on the ELNet architecture.\n\nThe explanations for the sub-figures in Figure 2 will be updated with more details, and we thank the reviewer for pointing this out. \n\nRegarding the questions addressed by the reviewer:\n\n> How was \"most informative\" defined? What is the performance (quantitatively) of ELNet on finding this slice?\nThe \u201cmost informative\u201d slice refers to the image (in an MRI sequence) that contains the largest area in which the tear resides. \n\nModel evaluation by the radiologist was carried out by randomly selecting one of the five cross-validation splits. Samples (confirmed by the radiologist) were randomly selected from both classes of the validation set (following the split), resulting in 9 cases containing an ACL tear and 7 cases without. The model being evaluated was trained on the training set (following the split), and of the 9 cases that contain an ACL tear, our model\u2019s prediction of the most informative slice coincided with the radiologist's slice selection in 8 of the cases. Of the 7 cases where the ACL is intact, our model\u2019s prediction matched the radiologist\u2019s slice selection in all 7 cases. As noted by the reviewer, the details of the model evaluation will be added to our revision. \n\n> What is the intuition/formal definition of MCC and why is it preferred compared to the ROC-AUC measure?\n\nThe formal definition of the Matthews Correlation Coefficient is:\n\n$$ MCC = \\frac{TP \\times TN - FP \\times FN}{\\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}} $$\n\nIntuitively, it measures how \u201cgood\u201d a classifier is at correctly predicting the majority of both the negative and positive cases. Given that the ROC-AUC refers to the area under the ROC curve (TPR and FPR are plotted for various classifier thresholds), the ROC-AUC is heavily influenced by the predicted positive class, making it an unreliable metric for evaluating class-imbalanced datasets (the case with the KneeMRI dataset). On the other hand, the MCC is invariant to class swapping, and is the preferred metric for evaluating a classifier\u2019s performance on a class-imbalanced distribution. 
An article by Chicco et al. addresses the advantages of the MCC in more detail: https://doi.org/10.1186/s12864-019-6413-7 \n\n> Why was the normalization performed in the slice direction only? Could it be of value in other spatial directions (e.g., in plane)?\nThe reason normalization was only applied in the slice direction relates to the assumption that features were to be extracted independently for each slice prior to the 2D-Max-Pool. If normalization was performed in the plane direction (as in Batch Normalization), each channel $c$ $(1 \\leq c \\leq C)$ of each slice\u2019s feature representation would be standardized across all slices $s$ $(1 \\leq s \\leq S)$. Doing so would imply dependence between slices when the designed feature extraction was intended to operate independently for each slice. Empirically speaking, models with Batch Normalization diverged after 10-15 epochs (will be added to the ablation study).\n\n> Which statistical tests were applied to show statistical significance for the numbers in bold of Table 2?\nThe numbers bolded in Table 2 were intended to highlight the evaluation metrics that demonstrate a favorable comparison between ELNet and MRNet on the MRNet dataset, and we apologize for the confusion. In response to the reviewer\u2019s inquiry, we performed a McNemar test between the trained ELNet and MRNet on the three classification pathologies. We obtained p-values of 0.009, 0.387, and 0.99 for the classification of meniscus tears, general abnormalities, and ACL tears, respectively. Thus, we may reject the null hypothesis that the two models' performances are equal for detecting meniscus tears, and we may not reject the null hypothesis in the case of detecting general abnormalities and ACL tears. As the reviewer mentioned, ELNet\u2019s performance is on par with MRNet in detecting general abnormalities and ACL tears, and exceeds MRNet\u2019s performance in detecting meniscus tears.\n\n> Will the source code be made publicly available upon publication?\nThe source code will be made available pending authorization from our academic institution. \n"}, {"title": "Re: A Model for Diagnosing Knee Pathologies", "comment": "We would like to thank the reviewer for a careful review of our work, and we appreciate the feedback provided.\n\nWe agree with the reviewer\u2019s remark regarding the statement we made, \u201ca model with higher sensitivity is always preferred since better detection of true positives has always been the goal of automated diagnosis algorithms\u201d, and we will remove this statement from the final revision.\n\nThe reviewer was correct in interpreting that an ELNet was trained for each pathology, and we will make sure to clarify this in our revision. When using the term \u201cefficient\u201d, we referred to our model as being memory efficient and computationally efficient when compared with the SOTA MRNet. Additionally, for each pathology in the MRNet dataset, our model was trained on a single imaging stack and compares favorably to MRNet, which was trained on all three imaging stacks for each pathology.\n\nRegarding the questions addressed by the reviewer:\n\n> Please provide quantification for this sentence:\n\u201cIn the majority of the cases provided, ELNet was able to indicate the most informative slice and has generated a heatmap that coincided with the region reported by the radiologist.\u201d\n\nModel evaluation by the radiologist was carried out by randomly selecting one of the five cross-validation splits. 
Samples (confirmed by the radiologist) were randomly selected from both classes of the validation set (following the split), resulting in 9 cases containing an ACL tear and 7 cases without. The model being evaluated was trained on the training set (following the split), and of the 9 cases that contain an ACL tear, our model\u2019s prediction of the most informative slice coincided with the radiologist's slice selection in 8 of the cases. Of the 7 cases where the ACL is intact, our model\u2019s prediction matched the radiologist\u2019s slice selection in all 7 cases. As noted by the reviewer, the details of the model evaluation will be added to our revision. \n\n\n> What was the reason for choosing different values for K for the two datasets? Were these values obtained using cross-validation?\n\nYes, the values for K were determined empirically. Quantitatively, we selected a smaller K for ELNets trained on the KneeMRI dataset since the training set contains around 730 samples for each split. The training set for the MRNet dataset contains 1,130 samples, and so a slightly bigger model (bigger K) was selected to avoid underfitting. \n"}, {"title": "Re: Method is well explained and the experiment for comparing with state-of-the-art is complete, but ablation study lacks.", "comment": "We would like to thank the reviewer for an in-depth review of our work, and we appreciate the comments made.\n\nAs suggested by the reviewer, we will add an ablation study to our final revision that investigates the effect of our proposed multi-slice normalization compared with the other normalization methods (including Batch Norm) and the effect of Max-Pool compared with BlurPool (both jointly and independently) on the ELNet architecture.\n\nRegarding the questions addressed by the reviewer:\n\n> The purpose of BlurPool\n\nBlurPool is a pooling operation that mitigates the shift-variance phenomenon observed in CNNs that employ Max-Pooling. There are various kernel sizes (2-7) for defining a BlurPool kernel, and the (frozen) weights are those of a 2D binomial filter. From a signal processing perspective, pooling operations are equivalent to spatial sampling. By applying BlurPool, feature representations are first passed through a low-pass filter before being pooled, and this preserves shift-invariance for feature representations in the CNN. In the backbone of the MRNet feature extractor, only Max-Pooling was employed, whereas in the ELNet feature extractor, pooling operations were all performed using BlurPool.\n\n> Lack of novelty. This is mainly because multi-slice normalization uses 2 existing normalization methods, and it is trivial how the normalization method for each disease is chosen.\n\nWe agree with the reviewer that the multi-slice normalization utilizes 2 existing normalizations. However, contrast normalization is typically used for image stylization, and layer normalization is often used for NLP tasks. Given that 3D MRI images are the inputs to an ELNet, we believe that the unconventional use of contrast normalization and layer normalization (as opposed to the typical Batch Normalization) for a classification task should be considered a novelty.\n\nFor detecting meniscus tears, coronal images were selected, and contrast normalization was chosen. For detecting ACL tears and general abnormalities, axial images were selected and layer normalization was chosen. 
Although such choices were determined empirically, we believe that the reason a certain multi-slice normalization works well for a certain image stack relates to the orientation in which the images were captured. \n\nIn addition, our empirical results (to be added to the ablation studies) demonstrate that training diverged after 10-15 epochs whenever Batch Norm was applied. This relates to the fact that Batch Norm applies an undesired standardization for each channel of feature representations across all slices, and therefore, multi-slice normalization plays a critical role in optimizing ELNet models for their respective tasks."}], "comment_replyto": ["5rdTfmciFg", "5JZxmvW70a", "fkvO-8tfNA"], "comment_url": ["https://openreview.net/forum?id=73rByVsMJu&noteId=5_3tN2yBmvf", "https://openreview.net/forum?id=73rByVsMJu&noteId=aP07o7Ak03-", "https://openreview.net/forum?id=73rByVsMJu&noteId=w9A7mkSnR6s"], "meta_review_cdate": 1586235952070, "meta_review_tcdate": 1586235952070, "meta_review_tmdate": 1586235952070, "meta_review_ddate ": null, "meta_review_title": "MetaReview of Paper117 by AreaChair1", "meta_review_metareview": "The reviewers agree that this is a well-written piece of work. The authors achieve promising results while proposing a very lightweight model architecture with regard to memory consumption. There is some concern about the lack of ablation studies that could show the benefit of some of the design choices made by the authors. However, the use of normalization layers typically employed in other fields of deep learning and the application of blur pooling show interesting results in this application and warrant further discussion. The authors are encouraged to add the missing ablation experiments to their final paper version.", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2020/Conference/Program_Chairs", "MIDL.io/2020/Conference/Paper117/Area_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=73rByVsMJu&noteId=J7G9d10qCm"], "decision": "accept"}
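
Supplementary to the MCC definition quoted in the author rebuttal above, a minimal sketch in plain Python of the stated formula; the function name `mcc_from_confusion` and the example confusion-matrix counts are illustrative assumptions, not figures from the paper or its datasets.

```python
import math

def mcc_from_confusion(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews Correlation Coefficient from confusion-matrix counts.

    MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)),
    matching the formula given in the rebuttal. Returns 0.0 when any
    marginal sum is zero (the usual convention to avoid division by zero).
    """
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom

# Hypothetical class-imbalanced example: 10 positive vs. 90 negative cases,
# with a classifier that finds 8 of the 10 positives at the cost of 5 false alarms.
print(mcc_from_confusion(tp=8, tn=85, fp=5, fn=2))  # approx. 0.66
```

Because both classes enter the numerator and every marginal enters the denominator, a degenerate classifier that predicts only the majority class scores near zero, which is the behavior the rebuttal appeals to when preferring MCC over ROC-AUC on the imbalanced KneeMRI data.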
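The rebuttal describes BlurPool as a frozen binomial low-pass filter applied before strided spatial sampling. A minimal 2D PyTorch sketch along those lines follows; the class name `BlurPool2d`, the 3x3 kernel size, and the MaxPool-with-stride-1 composition are illustrative assumptions (one common anti-aliased pooling setup), not the authors' released ELNet code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    """Anti-aliased downsampling: fixed binomial low-pass filter, then stride-2 sampling."""

    def __init__(self, channels: int, stride: int = 2):
        super().__init__()
        self.stride = stride
        # 3x3 binomial kernel: outer product of [1, 2, 1], normalized to sum to 1.
        b = torch.tensor([1.0, 2.0, 1.0])
        kernel = torch.outer(b, b)
        kernel = kernel / kernel.sum()
        # One identical frozen filter per channel, applied depthwise (not learned).
        self.register_buffer("kernel", kernel.expand(channels, 1, 3, 3).clone())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.pad(x, (1, 1, 1, 1), mode="reflect")  # keep spatial size before sampling
        # Blur (low-pass) and stride in one depthwise convolution.
        return F.conv2d(x, self.kernel, stride=self.stride, groups=x.shape[1])

# Illustrative composition: max taken at stride 1, BlurPool does the actual downsampling.
pool = nn.Sequential(nn.MaxPool2d(kernel_size=2, stride=1), BlurPool2d(channels=64))
y = pool(torch.randn(1, 64, 32, 32))  # -> torch.Size([1, 64, 16, 16])
```

Replacing a plain stride-2 max pool with this blur-then-sample step is what keeps feature maps approximately shift-invariant: small input translations change the low-pass-filtered signal smoothly instead of flipping which pixel survives the subsampling grid.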