{"forum": "H1x-pmF0JE", "submission_url": "https://openreview.net/forum?id=H1x-pmF0JE", "submission_content": {"title": "Group-Attention Single-Shot Detector (GA-SSD): Finding Pulmonary Nodules in Large-Scale CT Images", "authors": ["Jiechao Ma", "Xiang Li", "Hongwei Li", "Bjoern H Menze", "Sen Liang", "Rongguo Zhang", "Wei-Shi Zheng"], "authorids": ["mjiechao@infervision.com", "lixiang651@gmail.com", "hongwei.li@tum.de", "bjoern.menze@tum.de", "lsen@infervision.com", "zrongguo@infervision.com", "wszheng@ieee.org"], "keywords": ["Lung Nodule Detection", "Single Shot Detector", "Attention Network", "Group Convolution"], "TL;DR": "Group-Attention Single-Shot Detector for Pulmonary Nodules Detection in Large-Scale CT Images", "abstract": "Early diagnosis of pulmonary nodules (PNs) can improve the survival rate of patients and yet is a challenging task for radiologists due to the image noise and artifacts in computed tomography (CT) images. In this paper, we propose a novel and effective abnormality detector implementing the attention mechanism and group convolution on 3D single-shot detector (SSD) called group-attention SSD (GA-SSD). We find that group convolution is effective in extracting rich context information between continuous slices, and attention network can learn the target features automatically. We collected a large-scale dataset that contained 4146 CT scans with annotations of varying types and sizes of PNs (even PNs smaller than 3mm). To the best of our knowledge, this dataset is the largest cohort with relatively complete annotations for PNs detection. Extensive experimental results show that the proposed group-attention SSD outperforms the conventional SSD framework as well as the state-of-the-art 3DCNN, especially on some challenging lesion types.", "pdf": "/pdf/5c68c6eede3675baaa541758987d923aaff04712.pdf", "code of conduct": "I have read and accept the code of conduct.", "remove if rejected": "(optional) Remove submission if paper is rejected.", "paperhash": "ma|groupattention_singleshot_detector_gassd_finding_pulmonary_nodules_in_largescale_ct_images", "_bibtex": "@inproceedings{ma:MIDLFull2019a,\ntitle={Group-Attention Single-Shot Detector ({\\{}GA{\\}}-{\\{}SSD{\\}}): Finding Pulmonary Nodules in Large-Scale {\\{}CT{\\}} Images},\nauthor={Ma, Jiechao and Li, Xiang and Li, Hongwei and Menze, Bjoern H and Liang, Sen and Zhang, Rongguo and Zheng, Wei-Shi},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=H1x-pmF0JE},\nabstract={Early diagnosis of pulmonary nodules (PNs) can improve the survival rate of patients and yet is a challenging task for radiologists due to the image noise and artifacts in computed tomography (CT) images. In this paper, we propose a novel and effective abnormality detector implementing the attention mechanism and group convolution on 3D single-shot detector (SSD) called group-attention SSD (GA-SSD). We find that group convolution is effective in extracting rich context information between continuous slices, and attention network can learn the target features automatically. We collected a large-scale dataset that contained 4146 CT scans with annotations of varying types and sizes of PNs (even PNs smaller than 3mm). To the best of our knowledge, this dataset is the largest cohort with relatively complete annotations for PNs detection. 
Extensive experimental results show that the proposed group-attention SSD outperforms the conventional SSD framework as well as the state-of-the-art 3DCNN, especially on some challenging lesion types.},\n}"}, "submission_cdate": 1544618937107, "submission_tcdate": 1544618937107, "submission_tmdate": 1561398453220, "submission_ddate": null, "review_id": ["rygt69P37V", "HklLh2DF7E", "H1xgJZt6Q4"], "review_url": ["https://openreview.net/forum?id=H1x-pmF0JE&noteId=rygt69P37V", "https://openreview.net/forum?id=H1x-pmF0JE&noteId=HklLh2DF7E", "https://openreview.net/forum?id=H1x-pmF0JE&noteId=H1xgJZt6Q4"], "review_cdate": [1548675777121, 1548479662082, 1548746968291], "review_tcdate": [1548675777121, 1548479662082, 1548746968291], "review_tmdate": [1548856754031, 1548856732146, 1548856684976], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper15/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper15/AnonReviewer2"], ["MIDL.io/2019/Conference/Paper15/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["H1x-pmF0JE", "H1x-pmF0JE", "H1x-pmF0JE"], "review_content": [{"pros": "-\tGreat database. \n-\tThe group attention module is an interesting idea\n", "cons": "-\tGeneral poor presentation. The paper is very hard to follow. \n-\tLack of comparison to algorithms specifically designed for lung cancer detection, such as Setio16 or Wang 18. \n-\tNo indication model complexity between the proposed method and the alternatives.\n-\t\u201call these algorithms neither making use of the spatial relations between slices\u201d \u2013 false. Setio16 does, since does planar-reformatting. Wang18 does, since they use 3D volumes.\n-\tThe reviewer wonders how other better performing detection networks compare to the proposed method, such as YOLO9000, which outperforms candidate-selection-and-classification-networks being a unified framework.\n-\t\u201cThe proposed method showed superior sensitivity and fewer false positives compared to previous frameworks\u201d. That statement does not hold from Table 3. The sensitivity is only superior for ggn nodules. There is no overall sensitivity column in that table. This reviewer believes that the method is non-superior to the state-of-the art, it is less sensitive and has less false positives, this means that it is operating at another point of the ROC. \n", "rating": "2: reject", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"pros": "In this work, the authors present to leverage an attention mechanism to single-shot detector networks and to apply it to pulmonary nodule detection. They emphasize their world's largest dataset of CT scans with annotations of varying types and sizes of pulmonary nodules. \n\nMethodologically, the main contribution of this work is to design a group attention network that can be injected in the existing network architectures. 
", "cons": "While the reported performance presents the superiority of the proposed method, the detailed information in implementation  is missing such as the complete network architecture, number of groups in group convolution, loss functions, etc.\n\nTo better justify the effectiveness of the proposed method, it is highly recommended to experiment over the LUNA16 dataset and compare with the scores listed in the leaderboard.\n\nTable 2 and the statement of \"~ using the GA module could help the model learn more important feature layers.\": Regarding these, it would be interesting to see the weights estimated by GA modules and how those affected the performance.", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "1. Very good database\n2. The proposed group-attention mechanism is not only novel but also make sense. The authors successfully build an attention-based detection model in the pulmonary nodule detection.\n3. The proposed model works well in the proposed large dataset.\n4. The authors indicate the proposed group-attention  mechanism works!", "cons": "There may need more comparison experiments.\nThere are also some minor writing issues.", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}], "comment_id": ["HyeTc_HT4V", "HJlqi3STEE", "SkgRgeIpNN"], "comment_cdate": [1549781140566, 1549782178123, 1549783030107], "comment_tcdate": [1549781140566, 1549782178123, 1549783030107], "comment_tmdate": [1555945987825, 1555945987608, 1555945983107], "comment_readers": [["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper15/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper15/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper15/Authors", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "response to Reviewer 1", "comment": "We thank the Reviewer for the valuable and positive feedbacks. We would like to clarify some raised issues. \n\n- Cons: There may need more comparison experiments. There are also some minor writing issues. \n\n- Response: Thanks for the suggestion. Some additional experiments have been done on our dataset and the public dataset (LIDC-IDRI) [2]. Firstly, we further compare the proposed method with other good-performing detection networks, such as YOLO9000, and the results show that it outperforms YOLO9000[1] by a large margin. Secondly, we compare with the state-of-the-art method on the LIDC-IDRI. The proposed method achieved comparable results. These results will be updated in the final version. Necessarily we will further polish the manuscript. \n\n\nReferences:\n1. Redmon, Joseph, and Ali Farhadi. \"YOLO9000: better, faster, stronger.\" arXiv preprint (2017).\n2. Armato III, Samuel G., et al. \"The lung image database consortium (LIDC) and image database resource initiative (IDRI): a completed reference database of lung nodules on CT scans.\" Medical physics 38.2 (2011): 915-931.\n"}, {"title": "response to Reviewer 3", "comment": "We thank the Reviewer for the valuable feedbacks. We would like to clarify some raised issues. 
\n\n- Lack of comparison to algorithms specifically designed for lung cancer detection, such as Setio16 or Wang18.\n\nRe: As listed in Table 3 of the manuscript, we have compared the proposed method with Wang18 [1]. For method [2], we found that it was not suitable for comparison because [2] is not an end-to-end approach; some of its components are designed for specific types of nodule, i.e., solid, sub-solid and large. Thus this framework could not be adopted for the challenging detection task in this paper, which aims at detecting eight types of nodule. We further compared our method with Wang18 [1] on the LIDC-IDRI dataset [3] and achieved comparable results (CPM scores: 0.863 vs 0.878). This will be updated in the final version.\n\n\n- No indication model complexity between the proposed method and the alternatives.\n\nRe: Thanks for the comments. In the current framework, the number of parameters of the proposed method is similar to that of ResNeXt, i.e., around 70k. In contrast, [1] consists of two networks (a 3D candidate-box extraction network and a 3D false-positive reduction network). Our method is therefore computationally efficient compared to [1] in terms of parameter complexity. This will be updated in the final version.\n\n\n- \u201cAll these algorithms neither making use of the spatial relations between slices\u201d \u2013 false. Setio16 does, since does planar-reformatting. Wang18 does, since they use 3D volumes.\n\nRe: Thanks for the comment. The attention mechanism proposed in the paper is different from the above approaches, which use multi-view and 3D information respectively. We would like to rephrase this sentence to avoid misleading readers; it should read \u201cnone of these algorithms makes use of spatial attention across the neighbouring slices\u201d. \n\n\n- The reviewer wonders how other better performing detection networks compare to the proposed method, such as YOLO9000, which outperforms candidate-selection-and-classification-networks being a unified framework.\n\nRe: Thanks for the constructive comment. Indeed, YOLO9000 is a state-of-the-art real-time object detection system. We further used YOLO9000 in the proposed framework and found the final results were comparable with SSD (CPM: 0.499 vs 0.533). In this paper, we focus on the group-attention mechanism built upon the SSD. The results of YOLO9000 will be added in the final version.\n\n\n- \u201cThe proposed method showed superior sensitivity and fewer false positives compared to previous frameworks\u201d. That statement does not hold from Table 3. The sensitivity is only superior for ggn nodules. There is no overall sensitivity column in that table. This reviewer believes that the method is non-superior to the state-of-the art, it is less sensitive and has less false positives, this means that it is operating at another point of the ROC.\n\nRe: Thanks for the comments. CPM is a common metric to evaluate detection performance, which takes into account both sensitivity and the false positive rate. We achieve the highest CPM among the listed methods. From Table 3, for the six classes excluding \u201cp.ggn\u201d and \u201cm.ggn\u201d, the sensitivity rates are comparable to the state-of-the-art; for \u201cp.ggn\u201d and \u201cm.ggn\u201d, the sensitivity rates outperform the state-of-the-art by a large margin.\n\n\nReferences: \n[1] Bin Wang, Guojun Qi, Sheng Tang, Liheng Zhang, Lixi Deng, and Yongdong Zhang. Automated Pulmonary Nodule Detection: High Sensitivity with Few Candidates. 
\n[2] Setio, Arnaud Arindra Adiyoso, et al. \"Pulmonary nodule detection in CT images: false positive reduction using multi-view convolutional networks.\"\n[3] Armato III, Samuel G., et al. \"The lung image database consortium (LIDC) and image database resource initiative (IDRI): a completed reference database of lung nodules on CT scans.\" Medical physics 38.2 (2011): 915-931.\n\n"}, {"title": "response to Reviewer 2", "comment": "We thank the Reviewer for the valuable and positive feedback. We would like to clarify the raised issues. \n\n- While the reported performance presents the superiority of the proposed method, the detailed information in implementation is missing such as the complete network architecture, number of groups in group convolution, loss functions, etc.\n\nRe: Thanks for the constructive comments. More details, including the experimental setting, the detailed network architecture (e.g. height, width and number of feature maps in each layer), the number of groups in the group convolutions, the loss functions, the optimization strategy (e.g. learning rate and the Adam optimizer) as well as the computational complexity, will be included in the updated version.\n\n\n- To better justify the effectiveness of the proposed method, it is highly recommended to experiment over the LUNA16 dataset and compare with the scores listed in the leaderboard.\n\nRe: Thanks for the constructive comments. For the LUNA16 challenge, the leaderboard is no longer being updated and the dataset is no longer open to the public. To better justify the effectiveness of the proposed method, we conducted experiments on the LIDC-IDRI dataset [1] and obtained results competitive with the state-of-the-art method [2]. We will update this in the final version.\n \n\n- Table 2 and the statement of \"~ using the GA module could help the model learn more important feature layers.\u201d Regarding these, it would be interesting to see the weights estimated by GA modules and how those affected the performance.\n\nRe: Thanks. As shown in Figure 1, we use a GA layer to automatically learn the weights between successive feature layers. For example, we use the GA layer on the concatenated feature formed from C5 (N, H/16, W/16, C) and C6 (N, H/32, W/32, C). We apply the GA layer (the number of parameters is N * H/16 * W/16 * C) over the 2C concatenated channels: a channel with a better receptive field (a better score for the loss) is given a higher weight, while a channel with a worse receptive field (a worse score for the loss) is given a lower weight. For the resulting P4 layer features, the module learns the important channels automatically, which improves the efficiency of the model.\n\nReferences:\n[1] Armato III, Samuel G., et al. \"The lung image database consortium (LIDC) and image database resource initiative (IDRI): a completed reference database of lung nodules on CT scans.\" Medical physics 38.2 (2011): 915-931.\n[2] Bin Wang, Guojun Qi, Sheng Tang, Liheng Zhang, Lixi Deng, and Yongdong Zhang. Automated Pulmonary Nodule Detection: High Sensitivity with Few Candidates. 
\n"}], "comment_replyto": ["H1xgJZt6Q4", "rygt69P37V", "HklLh2DF7E"], "comment_url": ["https://openreview.net/forum?id=H1x-pmF0JE&noteId=HyeTc_HT4V", "https://openreview.net/forum?id=H1x-pmF0JE&noteId=HJlqi3STEE", "https://openreview.net/forum?id=H1x-pmF0JE&noteId=SkgRgeIpNN"], "meta_review_cdate": 1551356597389, "meta_review_tcdate": 1551356597389, "meta_review_tmdate": 1551881983149, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "The reviewers agree on the novelty of group attention module and its suitability for the given task of pulmonary nodule detection. \n\nAdditionally, they highlight the scale and diversity of the CT dataset used in training/evaluation of the proposed model. \n\nIt's shame to see that the manuscript does not mention or discuss any of the previous work produced by the medical image analysis community, which utilised different sorts of attention modules in classification, localisation, and segmentation tasks. I recommend the authors to review the existing literature on attention modelling more carefully. ", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=H1x-pmF0JE&noteId=S1e62M8SIV"], "decision": "Accept"}