AMSR / conferences_raw /midl19 /MIDL.io_2019_Conference_B1epyN8rlV.json
mfromm's picture
Upload 3539 files
fad35ef
raw
history blame contribute delete
No virus
27.4 kB
{"forum": "B1epyN8rlV", "submission_url": "https://openreview.net/forum?id=B1epyN8rlV", "submission_content": {"title": "Assessing Knee OA Severity with CNN attention-based end-to-end architectures", "authors": ["Marc G\u00f3rriz", "Joseph Antony", "Kevin McGuinness", "Xavier Gir\u00f3-i-Nieto", "Noel E. O'Connor"], "authorids": ["algayon2@gmail.com", "joseph.antony@dcu.ie", "kevin.mcguinness@dcu.ie", "xavier.giro@upc.edu", "noel.oconnor@dcu.ie"], "keywords": ["Convolutional Neural Network", "End-to-end Architecture", "Attention Algorithms", "Medical Imaging", "Knee Osteoarthritis"], "TL;DR": "Assessing Knee OA Severity with CNN attention-based end-to-end architectures", "abstract": "This work proposes a novel end-to-end convolutional neural network (CNN) architecture to automatically quantify the severity of knee osteoarthritis (OA) using X-Ray images, which incorporates trainable attention modules acting as unsupervised fine-grained detectors of the region of interest (ROI). The proposed attention modules can be applied at different levels and scales across any CNN pipeline helping the network to learn relevant attention patterns over the most informative parts of the image at different resolutions. We test the proposed attention mechanism on existing state-of-the-art CNN architectures as our base models, achieving promising results on the benchmark knee OA datasets from the osteoarthritis initiative (OAI) and multicenter osteoarthritis study (MOST). All the codes from our experiments will be publicly available on the github repository: \\url{https://github.com/marc-gorriz/KneeOA-CNNAttention}", "code of conduct": "I have read and accept the code of conduct.", "remove if rejected": "(optional) Remove submission if paper is rejected.", "pdf": "/pdf/c0bdbe67cfd246db72d139edb1af1af7be6e08a8.pdf", "paperhash": "g\u00f3rriz|assessing_knee_oa_severity_with_cnn_attentionbased_endtoend_architectures", "_bibtex": "@inproceedings{g\u00f3rriz:MIDLFull2019a,\ntitle={Assessing Knee {\\{}OA{\\}} Severity with {\\{}CNN{\\}} attention-based end-to-end architectures},\nauthor={G{\\'o}rriz, Marc and Antony, Joseph and McGuinness, Kevin and Gir{\\'o}-i-Nieto, Xavier and O'Connor, Noel E.},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=B1epyN8rlV},\nabstract={This work proposes a novel end-to-end convolutional neural network (CNN) architecture to automatically quantify the severity of knee osteoarthritis (OA) using X-Ray images, which incorporates trainable attention modules acting as unsupervised fine-grained detectors of the region of interest (ROI). The proposed attention modules can be applied at different levels and scales across any CNN pipeline helping the network to learn relevant attention patterns over the most informative parts of the image at different resolutions. We test the proposed attention mechanism on existing state-of-the-art CNN architectures as our base models, achieving promising results on the benchmark knee OA datasets from the osteoarthritis initiative (OAI) and multicenter osteoarthritis study (MOST). All the codes from our experiments will be publicly available on the github repository: {\\textbackslash}url{\\{}https://github.com/marc-gorriz/KneeOA-CNNAttention{\\}}},\n}"}, "submission_cdate": 1545065445194, "submission_tcdate": 1545065445194, "submission_tmdate": 1561399347457, "submission_ddate": null, "review_id": ["B1lg0ibsQ4", "HyghCj_qmE", "SJgAxOj3Q4"], "review_url": ["https://openreview.net/forum?id=B1epyN8rlV&noteId=B1lg0ibsQ4", "https://openreview.net/forum?id=B1epyN8rlV&noteId=HyghCj_qmE", "https://openreview.net/forum?id=B1epyN8rlV&noteId=SJgAxOj3Q4"], "review_cdate": [1548585927532, 1548549076174, 1548691446309], "review_tcdate": [1548585927532, 1548549076174, 1548691446309], "review_tmdate": [1548856739735, 1548856738410, 1548856701408], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper157/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper157/AnonReviewer1"], ["MIDL.io/2019/Conference/Paper157/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1epyN8rlV", "B1epyN8rlV", "B1epyN8rlV"], "review_content": [{"pros": "The authors aimed to investigate the effect of incorporating attention modules to various CNN architectures for automatically grading knee radiographs based on the OA severity. Supervised training using clinically accepted KL-grade as a ground truth is combined with unsupervised attention module training proposed by Mader2018 to achieve the goal. Related work and attention module structure are explained in great detail; however, the experiments and results require better explanation and further refinement.\n\npros:\n1-\tAutomatically grading the knee OA severity based on KL-grade will reduce the work load of radiologists and it could potentially enable automated OA progression measurements in clinics.\n2-\tIt was shown that attention modules can be inserted into various locations of the CNN architecture. \n\n\n", "cons": "1-\tThe study proposes to use attention modules to remove the need for localization of the knee joints prior to classification. It is mentioned that the need for knee joint localization step affects the quantification accuracy negatively and adds further complexity to the training process. Even though the attention modules remove the necessity for knee joint localization (proposed by the authors), it still adds further complexity to the modelling and training process compared to previous approaches (multi-loss, concatenation of features, where to locate attention modules and so on). Moreover, the results presented using attention modules are performing worse than the CNNs without attention modules ( ~ 6% lower accuracy in Table 2 vs Table 3, and as mentioned by the authors in the Conclusions.). This reduction in the accuracy needs to be properly investigated and the reasons should be identified in detail. These are the fundamental issues with this paper which needs to be addressed in detail. Some suggestions/required improvements:\n1.a.\tWe need to know if the attention module improves the accuracy or not. This could be done by comparing CNNs without attention modules (which accepts either the full knee image or localized knee joint images) and with attention modules in a systematic way. In the current manuscript, this improvement, if any, cannot be distinguished from other factors. \n1.b.\tThe authors should provide results for the Resnet-50 and VGG-16 architectures without attention modules. This information is missing in the current Tables 2 and 3. Because of this reason, the readers cannot be clear if the improvement over Antony et al.\u2019s models are due to attention modules or changes in the CNN architecture.\n\n2-\tIt is not clear if the data used in calculating the Kappa from 150 subjects from OAI dataset is in the test set of the original split or not. If these images were used for training or validation, this raises a question on the validity of the results presented in Table 3.\n\n3-\tFurther details are required for Section 3.3. I expect that the size of the fully connected (FC) layer after channel-wise concatenation will have an effect on the training. The size of the FC was not defined and its effect on the accuracy was not experimented. In addition, it was mentioned that several multi-branch combinations are tested in multi-loss training without giving details. It is not clear if this was achieved by some sort of a grid search or using a few empirical combinations. Please add required details to improve our understanding of the effect of attention branch locations and their corresponding weights to the loss function.\n\n4-\tDataset generation needs corrections/explanations: \n4.a.\tIt is mentioned that the OAI dataset has 4,476 participants at the baseline, but it has 4,796 subjects.\n4.b.\tTraining/validation/testing set are generated from images. Are there any specific reason for the authors not to generate these sets based on subjects? Is it possible that data splitting from images could add a bias to the results presented?\n\n5-\tThe manuscript has several typos, please fix them. For example: page 1: bony --> bone, page 5: focuse --> focus/focused, page 9: Table ?? --> Table 3.", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "The paper is technically sound and propose an interesting approach to fuse two otherwise separate steps --localization and classification/regression -- that are necessary of knee OA severity assessment. \n\nAlthough the paper is too long, and it could have been shortened in some parts and sections, the authors included a lot of material to better position their findings (appendix etc). The structure of the paper, and especially the state of the art analysis is very good. \n\nThe approach has some novelty to it. It uses multiple losses that are combined with weights chosen manually according to the rationale that deeper layers learn faster and overfit more. Using attention is not per-se new and the authors position their paper nicely with respect to previous approaches, but attention modules in this context have the potential of simplifying training and inference as well as improve results. \n\nThe results that are presented are well related with state of the art results and are convincing. Some comparisons with other approaches have in fact been shown and crucial aspects of the algorithms and method explored.\n\nThe result section is well structured and I have liked that the authors showed the performances of att0 att1 and att2 separately to give an idea of the behavior of these predictions heads. The results seem interesting and I agree with the authors when they say that this research brings a valuable contribution to the community. ", "cons": "The paper seems not to respect the conference format which dictates a maximum number of pages that is smaller than the number of pages of this submission.\n\nThe method of Tiulpin et al 2018 which uses a siamese network could have been explained better, what does it mean that they use symmetry of x-ray knee images. \n\nThere are repetitions and long sentences that take up a lot of space without conveying anything strictly useful. Both introduction and experimental sections can be shortened without changing much of the meaning.\n\nThe network architecture is not very clear. I would like to see a schematic representation of the network which in this moment seems to have prediction branches due to the presence of attention mechanism at different points in the network. A schematic example of this would clarify much of what actually happens in this method. Figure 2 clarifies something but it would be nice to see where are the prediction layers placed together with respective losses. In other words, the authors need to structure their method section, prioritize things they want to explain, give a panoramic view on their approach and then zoom in to the details about how they define their losses etc.\n\nFigure 1 is confusing because the image gets rotated and N (number of channel) shifts place with another axis. Would be better to keep it consistent. \n\nThe results are not state of the art, although the method is much more difficult to implement and train due to the presence of the early fusion or multi losses (that require manually picked weights). ", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"pros": "1. Work proposes using attention modules after various layers in a CNN to predict the severity of the knee osteoarthiritis (OA). \n\n2. The paper's text is well-written. Although the 'attention module' by itself is not a novel contribution (convolutions followed by activation has existed in literature before [1]), the combination of attention blocks from multiple resolutions using a multi-loss paradigm is novel.\n\n3. A test of the module's adaptability to various architectures is interesting \n\n[1] Oktay et al. 'Attention U-Net:Learning Where to Look for the Pancreas'. In: MIDL 2018", "cons": "1. Conclusions look very heuristic and the authors do not try to explain them. Ex: In table2 (Early fusion), why does att2 not feature in ResNet and VGG, but features in Anthony et al.s' version? Based on the architecture details, this might have something to do with the resolutions attended by these layers.\n\n2. Results and Table 2 are presented in an unclear fashion and a redesign is strongly suggested. Ex: in pg. 7, text below fig. 3: Text says \"Best performance achieved with attention branches att0 and att1...\". However, Multi-Loss section in table 2 lists the performance numbers separately. Again, text in pg. 8 states \"... the VGG-16 attention branch att0, achieved the best classification performance...\". Is there no optimal combination of attention branches in Multi-loss?\n\n3. Captions of figures and tables provide little information. Similarly, the take away from the captions of loss curves and activation maps in the appendices is not obvious.", "rating": "2: reject", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": ["SygrB0t6EE", "SJgIy19aNV", "H1lqrl9pN4", "S1eK_dgGr4"], "comment_cdate": [1549798973316, 1549799134301, 1549799489909, 1550088304974], "comment_tcdate": [1549798973316, 1549799134301, 1549799489909, 1550088304974], "comment_tmdate": [1555945985492, 1555945985274, 1555945966109, 1555945956849], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper157/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper157/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper157/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper157/AnonReviewer3", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "No Title", "comment": "Cons: 1 - \n\nThe attention mechanism is designed to be adaptable to any CNN pipeline. However, the distribution of branches can be variable depending on the architecture of the base network. We tested models with different depth and complexity, so the same level of abstraction in att1 and att2 for Antony et. al models can be achieved in att0 and att1 for deeper architectures such as VGG or ResNet. After testing different combinations, the locations with better performance are those presented in Table 2. Specifically, Appendix A shows the architecture of all the tested models with the output resolution of their convolutional blocks and the location of the attention branches.\n\nCons: 2 - \n\nThe training and testing methodology are different in the multi-loss models; we agree that this was unclear in the method section and we will revise this accordingly. At training time the selected attention branches (following the outline described above and presented in Table 2) are combined in a global loss function by adding a weighted version of each individual loss. Once the model is trained, since each attention branch produces an independent prediction, the top performing one is used at test time. \n\nA more sophisticated ensemble approach was considered but not included in this paper due to time constraints. This approach involves averaging the pre-activation outputs (i.e. values before the softmax) of each of the model branches and then passing the result through a softmax to enforce a valid probability distribution over the classes. This idea is often effective in test time data augmentation and ensemble methods and may improve performance over the single best model described here.\n\nCons: 3 - \n\nWe will improve the captions in the camera ready version ensuring that they are more descriptive and self-explanatory. \n"}, {"title": "No Title", "comment": "Cons: 1 - \n\nAs mentioned in the related work, Antony. et al. [1] investigated the use of CNNs for automatically assessing knee OA severity. Several approaches were tested for classifying or regressing the KL grades from already extracted knee joints, including SVM-based methods and CNNs with different complexity (VGG-M-128, VGG-16, BVLC CaffeNet and AlexNet). However, the best results were obtained by designing new models optimized for the task, with the aim to reduce the complexity of already tested architectures for preventing overfitting during training. Definitely, we should include those results in this paper, which can help us to contrast the improvements of the attention methodology.\n\nHowever, as shown in Table 2, the attention mechanism does not achieve a desirable performance in the Antony et. al. models since their shallow architectures do not provide sufficient abstraction in the attention features, while slightly deeper models such as VGG-16 with attention branches located at shallow positions outperform them without falling into overfitting. \n\nCons: 2 - \n\nThe OAI split used by the radiologic reliability readings to compute the Kappa grading does not match our testing set which also includes data from MOST dataset. However, we followed the methodology of some works in the state-of-the-art [2] [3], in which those values were used for making an approximated overview of the current gold standard for diagnosing OA with the aim of compare their results to the human level accuracy. We will revise this section for the camera ready version giving more detail in how this values are computed and specifying that they are providing an approximated comparison.\n\nCons: 3 - \n\nRegarding the early fusion methodology, several fully connected layer combinations were tested in the experimentation, evaluating their effect on the convergence behaviour during training. The best performance was achieved with a single 512-dimensional FC layer, but for brevity reasons we did not include this analysis in this paper. Similar are the reasons for the multi-loss training methodology, in which we performed a grid search for the best branch locations (following the outline described before in this rebuttal) to cross-validate the loss weights between a range of 0.5 to 1 with a step size of 0.1, using the validation loss as monitor. We will include more details in the camera ready version and complementary figures in the Annexes if necessary.\n\nCons: 4a - \n\nFor many participants in the OAI dataset the KL grades were missing either for right or left knee. In total we selected the 4,476 participants with the KL gradings available for both knees.\n\nCons: 4b - \n\nThe training, validation, and test sets were split based on the KL grades distribution. A split of 70-30 split for training-test sets were used. 10% of the training data was used for validation. As seen in previous works of Antony et. al. [1], there were not much variations in the overall results when changing the training and test sets.\n\nCons: 5 - \n\nWe will do an orthographic checking for the camera ready version and correct possible typos.\n\n\n[1] Antony, A. Joseph. \"Automatic quantification of radiographic knee osteoarthritis severity and associated diagnostic features using deep convolutional neural networks.\" PhD diss., Dublin City University, 2018. \n[2] Tiulpin, Aleksei, J\u00e9r\u00f4me Thevenot, Esa Rahtu, Petri Lehenkari, and Simo Saarakkala. \"Automatic knee osteoarthritis diagnosis from plain radiographs: a deep learning-based approach.\" Scientific reports 8, no. 1 (2018): 1727.\n[3] Klara, Kristina, Jamie E. Collins, Ellen Gurary, Scott A. Elman, Derek S. Stenquist, Elena Losina, and Jeffrey N. Katz. \"Reliability and accuracy of cross-sectional radiographic assessment of severe knee osteoarthritis: role of training and experience.\" The Journal of rheumatology (2016): jrheum-151300.\n"}, {"title": "No Title", "comment": "Rebuttal for each paragraph in the review: \n\nCons: 1 - \n\nThe paper is currently 10 pages, exceeding the limit of 8 pages (excluding references, acknowledgments and appendix). We will revise the structure for the camera ready version to make the paper more succinct, reducing the introduction and related work and placing more emphasis on the method.\n\nCons: 2 - \n\nThe Tiulipin et al 2018 method represents the state-of-the-art for Knee OA diagnosis using CNNs. We will include a new paragraph to highlight the importance of this methodology. Specifically, the method proposes the use of Siamese CNNs, which are originally designed to learn a similarity metric between pairs of images. However, rather than comparing image pairs, the authors extend this idea to similarity in knee x-ray images (with 2 symmetric knee joints). Splitting the images at the central position and feeding both knee joints into a separate CNN branch allows the network to learn identical weights for both branches.\n\nCons: 3 - \n\nWe will recheck those parts and reduce redundancy; a reduction of long sentences will be a key consideration for producing the next version. \n\nCons: 4 - \n\nOverall, each attention branch produces an independent prediction of the input images by applying the attention modules in specific network locations (taking a convolutional volume as input) and then applying a softmax layer at the top to produce specific predictions. In that sense, Figure 2 shows a visualization of the attention mask for each branch but does not describe the overall architecture. For the camera ready version, we will enhance this figure, sketching the described methodology and including the attention visualization as a complement to this.\n\nCons: 5 - \n\nWe will rotate the left side of the figure to keep consistency in the channel axis and redraw the figure in the camera ready version. \n\nCons: 6 - \n\nAlthough our method does not surpass the state-of-the-art and could be interpreted as difficult to implement, the overall aim was to reduce the training complexity using an end-to-end architecture. As mentioned in Section 4, without an end-to-end design, the models require a localisation step to focus the classifier to the knee joint regions of interest. For instance, previous work of Antony et. al. trained a FCN network to automatically segment the input knee joints, needing a manual annotation process for masking all the training data. Our approach, in contrast, requires no such annotation of knee joint locations in the training data. Localising the knee joints in an unsupervised way can reduce performance by adding noise in the attention masks and thus into the overall process. A more robust attention module can improve the results and have a bigger impact in the future.\n"}, {"title": "Response to rebuttal", "comment": "I thank the authors to answer my questions regarding the manuscript. I am satisfied by their response to Cons 3-5. However, my first two questions were not addressed in great detail. \n\nTo be more specific: \nCons #1: I understand that the data presented here gives us some idea about the negative effect of adding attention mechanisms into shallow networks (Antony et al.'s accuracy without attention mechanism is mentioned on page 3 and the accuracy of its variant with attention mechanism is provided in Table 2). This result supports the authors claim that the attention mechanism doesn\u2019t work well with Antony et al.s network. However, there is no information about the accuracy of networks VGG-16 and ResNet-50 without attention mechanism in the manuscript. So, I am not sure how the authors can claim that the attention mechanism improves accuracy on \u201cslightly deeper\u201d networks. In the way results presented in the current manuscript lacks a comparison which can support authors\u2019 claim. Because of this reason, it is not clear if the proposed architectural change via attention mechanism improves the accuracy of the classification task or not. Please add an additional row to the Table 2, showing the accuracy results of CNNs without attention mechanism. This is one of the ways to support the authors claim and convince the readers.\n\nCons #2: The authors explained the methodology they used to calculate Kappa coefficients in the manuscript. My original question was more related to if any of the 150 subjects were in the test set or not. I am less interested if these 150 subjects matches the test set or not. Specifically, we need to know if any of these subjects were in the training set or they were all in the test set. This information is important which needs to be clearly identified in the manuscript. For example, if the majority of the 150 people were in the training set or validation set, I don\u2019t think that the authors comparison of their models\u2019 results to human level accuracy will be fair. If all the 150 people were in the test set, then I am confident with the results presented here. Please provide more information about how are 150 subjects distributed into the train/validation and test sets\n\nThe author comments to Cons#1-2 reduce my enthusiasm about the manuscript. The claim of using attention mechanism improves accuracy on \u201cdeeper networks\u201d was not tested/reported properly by the original submission and my comments on it was not addressed directly in the rebuttal. Unless the authors aim to prove their claim providing sufficient results prior to the conference, unfortunately, I will need to change my rating from 3-accept to 2-reject. \n"}], "comment_replyto": ["SJgAxOj3Q4", "B1lg0ibsQ4", "HyghCj_qmE", "SJgIy19aNV"], "comment_url": ["https://openreview.net/forum?id=B1epyN8rlV&noteId=SygrB0t6EE", "https://openreview.net/forum?id=B1epyN8rlV&noteId=SJgIy19aNV", "https://openreview.net/forum?id=B1epyN8rlV&noteId=H1lqrl9pN4", "https://openreview.net/forum?id=B1epyN8rlV&noteId=S1eK_dgGr4"], "meta_review_cdate": 1551356595857, "meta_review_tcdate": 1551356595857, "meta_review_tmdate": 1551881982351, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "The reviewers commented positively on the motivation to combine classification and localization into one step, on the evaluation of how best to make use of attention modules and on the clinical motivation of the work. \n\nHowever, the reviewers have pointed out insufficient clarity in the text and the overlengths of the manuscript. Furthermore, two of the reviewers mentioned that the evaluation appears lacking key comparisons and that the conclusions are heuristic. \n\nI believe that the points regarding clarity and lengths can be addressed for the camera ready version. The unclarities about the evalation cast doubt on the merit of the proposed method prevent this from being a top submission. Nevertheless, I follow the recommendation of 2/3 reviewers to accept this paper with a poster presentation. ", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1epyN8rlV&noteId=r1e22f8BLN"], "decision": "Accept"}