{"forum": "B1lpb10Ry4", "submission_url": "https://openreview.net/forum?id=B1lpb10Ry4", "submission_content": {"title": "XLSor: A Robust and Accurate Lung Segmentor on Chest X-Rays Using Criss-Cross Attention and Customized Radiorealistic Abnormalities Generation", "authors": ["Youbao Tang", "Yuxing Tang", "Jing Xiao", "Ronald M. Summers"], "authorids": ["youbao.tang@nih.gov", "yuxing.tang@nih.gov", "xiaojing661@pingan.com.cn", "rms@nih.gov"], "keywords": ["Lung segmentation", "chest X-ray", "criss-cross attention", "radiorealistic data augmentation"], "abstract": "This paper proposes a novel framework for lung segmentation in chest X-rays. It consists of two key contributions: a criss-cross attention based segmentation network and radiorealistic chest X-ray image synthesis (i.e. a synthesized radiograph that appears anatomically realistic) for data augmentation. The criss-cross attention modules capture rich global contextual information in both horizontal and vertical directions for all the pixels, thus facilitating accurate lung segmentation. To reduce the manual annotation burden and to train a robust lung segmentor that can be adapted to pathological lungs with hazy lung boundaries, an image-to-image translation module is employed to synthesize radiorealistic abnormal CXRs from the source of normal ones for data augmentation. The lung masks of synthetic abnormal CXRs are propagated from the segmentation results of their normal counterparts, and then serve as pseudo masks for robust segmentor training. In addition, we annotate 100 CXRs with lung masks on a more challenging NIH Chest X-ray dataset containing both posteroanterior and anteroposterior views for evaluation. Extensive experiments validate the robustness and effectiveness of the proposed framework. 
The code and data can be found at https://github.com/rsummers11/CADLab/tree/master/Lung_Segmentation_XLSor .", "pdf": "/pdf/ac63b148df61e8935f2426cfaead089bf74ddd90.pdf", "code of conduct": "I have read and accept the code of conduct.", "remove if rejected": "(optional) Remove submission if paper is rejected.", "paperhash": "tang|xlsor_a_robust_and_accurate_lung_segmentor_on_chest_xrays_using_crisscross_attention_and_customized_radiorealistic_abnormalities_generation", "_bibtex": "@inproceedings{tang:MIDLFull2019a,\ntitle={{\\{}XLS{\\}}or: A Robust and Accurate Lung Segmentor on Chest X-Rays Using Criss-Cross Attention and Customized Radiorealistic Abnormalities Generation},\nauthor={Tang, Youbao and Tang, Yuxing and Xiao, Jing and Summers, Ronald M.},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=B1lpb10Ry4},\nabstract={This paper proposes a novel framework for lung segmentation in chest X-rays. It consists of two key contributions: a criss-cross attention based segmentation network and radiorealistic chest X-ray image synthesis (i.e. a synthesized radiograph that appears anatomically realistic) for data augmentation. The criss-cross attention modules capture rich global contextual information in both horizontal and vertical directions for all the pixels, thus facilitating accurate lung segmentation. To reduce the manual annotation burden and to train a robust lung segmentor that can be adapted to pathological lungs with hazy lung boundaries, an image-to-image translation module is employed to synthesize radiorealistic abnormal CXRs from the source of normal ones for data augmentation. The lung masks of synthetic abnormal CXRs are propagated from the segmentation results of their normal counterparts, and then serve as pseudo masks for robust segmentor training. 
In addition, we annotate 100 CXRs with lung masks on a more challenging NIH Chest X-ray dataset containing both posteroanterior and anteroposterior views for evaluation. Extensive experiments validate the robustness and effectiveness of the proposed framework. The code and data can be found at https://github.com/rsummers11/CADLab/tree/master/Lung{\\_}Segmentation{\\_}XLSor .},\n}"}, "submission_cdate": 1544638212663, "submission_tcdate": 1544638212663, "submission_tmdate": 1561397113755, "submission_ddate": null, "review_id": ["S1gZRQf87E", "Hkgvc8x0QE", "HJgBS0GkEV"], "review_url": ["https://openreview.net/forum?id=B1lpb10Ry4&noteId=S1gZRQf87E", "https://openreview.net/forum?id=B1lpb10Ry4&noteId=Hkgvc8x0QE", "https://openreview.net/forum?id=B1lpb10Ry4&noteId=HJgBS0GkEV"], "review_cdate": [1548260297124, 1548777103400, 1548852796924], "review_tcdate": [1548260297124, 1548777103400, 1548852796924], "review_tmdate": [1548856722026, 1548856682989, 1548856678371], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper26/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper26/AnonReviewer2"], ["MIDL.io/2019/Conference/Paper26/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1lpb10Ry4", "B1lpb10Ry4", "B1lpb10Ry4"], "review_content": [{"pros": "- The authors transfer two recently published tools, MUNIT for data augmentation and criss-cross attention for semantic segmentation, from the pure computer vision field to medical imaging. Specifically, they employ these tools to improve pathological lung segmentation on CXR images.\n\n- The authors present an extensive evaluation employing publicly available datasets, in which the lung lesions are mild, and their own dataset with severe lesions. The results obtained on the publicly available datasets are similar to state-of-the-art approaches, and particularly better for highly damaged lungs. 
\n\n- Employment of generative models such as MUNIT for data augmentation is quite new in the medical imaging field. \n\n- Both criss-cross attention and especially data augmentation with generative models could be easily extended and be useful in similar segmentation problems, particularly if the authors make a clean and robust codebase publicly available after publication.\n\n\n", "cons": "- As I mentioned above, the employment of two novel techniques from the computer vision field is welcome. However, the method has not been adapted for the presented problem. It seems like the authors have put two pieces of code together and subsequently applied them to their images. Computer vision and medical image segmentation problems have a lot in common but also several differences that should be reflected within the models. This issue could have been evaluated by publishing the code without the acceptance condition imposed by the authors.\n\nCriss-Cross Attention based Network for Lung Segmentation:\n - The section is quite difficult to understand. The text and Figure 2 are insufficient (and somewhat disconnected) as an explanation of the purpose of the work; I had to read the full original paper in order to understand the model. Besides, the figure is mostly the same as the one employed in the original work, but that work is not explicitly cited. \nIn this section, the weight decay and the batch size have changed with respect to the original work. Why? Differences between images? Convergence issues? Etc.\n\nData Augmentation via Abnormal Chest X-Ray Pairs Construction:\n- Again, the employment of MUNIT for data augmentation is a nice approach but, is there a way to guarantee realistic deformation of all lungs? How much does it matter?\n\nDatasets:\n - Is there a single CXR per subject for all datasets? \n- The dataset descriptions are lacking: voxel size/resolution, etc. 
\n\n\nQuantitative Results:\n- To perform a fairer and more interesting comparison, it would be advisable to employ an adapted version of the U-Net with the criss-cross attention modules in addition to the classic model. \n- In general, there are conclusions, subject to discussion, mixed with the results, e.g. \u201cThis demonstrates that the proposed XLSor based on the criss-cross\u2026\u201d, \u201csuggesting the effectiveness of our data augmentation technique for lung segmentation...\u201d, etc.\n- The results for severe lesions are better for the proposed model. The overlapping measures reach, on average, the results obtained for mild lesions but, surprisingly, the AVD is much smaller for the difficult cases. How do you justify this? I suppose that the lung shape is much more complex for your dataset, so similar results between mild and severe lesions would already be a big achievement, but such favorable differences are strange and point to an ad-hoc model. Please explain. \n- The AVD has no units.\n\n\n\n\n", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "The paper is very well written and uses extensive experiments to back up the claims made in the paper. The main contribution of this work is a deep learning multi-organ segmentation approach for segmenting abnormal chest X-ray images. The authors address the problem of multi-organ segmentation in the scenario where expert-segmented datasets for abnormal cases are generally not available. \nThe authors combine criss-cross attention networks (which provide computational speedup benefits compared with standard attention methods) with a multimodal unsupervised translation method to generate virtual abnormal chest X-ray datasets using the expert-segmented normal chest X-ray images. \nThe authors provide a comparison of their method to U-Net. Ablation tests showing the benefits of the criss-cross attention and the data augmentation are also provided. 
", "cons": "While the results are very promising, the approach itself is somewhat incremental, making use of existing methods, with the exception of applying image translation using MUNIT in a new way to generate abnormal images. Also, MUNIT is an approach to model multiple modes in the data arising from different classes. It's unclear why this is the most suitable approach for this work -- is this not a bit of overkill? There aren't really that many stylistic variations when translating from normal to abnormal chest X-rays. \n\nPerhaps including a very brief discussion on why such an approach was chosen would be helpful. ", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"pros": "This paper proposes a lung segmentation framework for chest X-rays, including a criss-cross attention based segmentation network and a radiorealistic chest X-ray image synthesis process for data augmentation. Experiments were performed on multiple datasets. The proposed method sounds reasonable, and the manuscript is easy to follow.", "cons": "1) The main concern is that the experimental results do not seem strong enough. For instance, the authors simply compared their method with U-Net, while there are many other deep learning methods for segmentation. \n\n2) Besides, there is no comparison between the proposed method and the method without the attention module. \n\n3) In addition, as shown in Table 2, U-Net_A4 achieves better results than U-Net_R and U-Net_R+A3, suggesting that using only the constructed images is better than using real images. No explanation is given in the manuscript. 
", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct", "special_issue": ["Special Issue Recommendation"]}], "comment_id": ["BkgxtFT6VV", "Skl9dEppVN", "HkxXAL3T4N", "Sygo9DkAVE"], "comment_cdate": [1549814136157, 1549812849611, 1549809355230, 1549821843431], "comment_tcdate": [1549814136157, 1549812849611, 1549809355230, 1549821843431], "comment_tmdate": [1555945981692, 1555945981432, 1555945981202, 1555945980943], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper26/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper26/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper26/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper26/AnonReviewer3", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Response to AnonReviewer3", "comment": "We thank the reviewer for the extensive comments as well as the constructive suggestions. We will try to address your questions and suggestions below:\n\n- The issue could have been evaluated by publishing the code without the acceptance condition imposed by the authors.\nWe will clean up the code and release it soon.\n\n- The text and Figure 2 are insufficient.\nMore description will be given in the revised version to make it more self-contained, and the original work will be explicitly cited in Figure 2.\n\n- The weight decay and the batch size have changed with respect to the original work, why?\nActually, the weight decay is 0.0005 in the released code of CCNet instead of the 0.0001 reported in the original paper. In this work, we also set the weight decay to 0.0005 to reduce overfitting of the trained model. In the original work, 4 \u00d7 TITAN XP GPUs are used for training and the batch size is 8. 
In this work, only one V100 GPU is used for training, so we set the batch size to 4 instead of 8 due to the GPU memory limit.\n\n- Is there a way to guarantee realistic deformation of all lungs? How much does it matter?\nIn this work, the lung shapes are sometimes slightly deformed in the generated abnormal CXRs, but we found that the lung segmentation performance didn't change much when such deformed samples were manually removed from training. Therefore, we use all generated samples to train the proposed segmentor without requiring any human effort. As future work, we will use some specific constraints (e.g. lung masks) to guide the MUNIT model to produce more realistic abnormal CXRs in which the lungs are not deformed. \n\n- Is there a single CXR per subject for all datasets? \nIn the original NIH Chest X-Ray dataset, some subjects have multiple CXRs. But all CXRs belong to different subjects in the annotated subset.\n\n- The dataset descriptions are lacking: voxel size/resolution, etc. \nAll images are grayscale, with resolutions of 2048\u00d72048 in the JSRT dataset, 4020\u00d74892 or 4892\u00d74020 in the Montgomery dataset, and 1024\u00d71024 in the NIH dataset. \n\n- It would be advisable to employ an adapted version of the U-Net with the criss-cross attention (CCA) modules in addition to the classic model. \nWe have trained the U-Net model with the CCA module (denoted U-Net_CCA) using two training settings (i.e. R and R + A4) and tested it on both the public testing set and the NIH dataset. 
\nThe results on the public testing set are as follows:\nMethod RECALL PRECISION DICE AVGDIST VOLSMTY\nUNet_R 0.976/0.02 0.968/0.03 0.972/0.02 0.198/0.56 0.988/0.02 \nUNet_CCA_R 0.976/0.02 0.970/0.03 0.972/0.02 0.191/0.54 0.988/0.02 \nUNet_R+A4 0.973/0.02 0.978/0.01 0.975/0.01 0.152/0.46 0.990/0.01 \nUNet_CCA_R+A4 0.975/0.02 0.977/0.01 0.975/0.01 0.130/0.33 0.990/0.01 \nThe results on the NIH dataset are as follows:\nMethod RECALL PRECISION DICE AVGDIST VOLSMTY\nUNet_R 0.938/0.07 0.761/0.20 0.823/0.16 5.231/9.02 0.869/0.15 \nUNet_CCA_R 0.929/0.07 0.804/0.20 0.842/0.14 4.782/8.05 0.895/0.14 \nUNet_R+A4 0.943/0.04 0.958/0.03 0.950/0.03 0.454/0.73 0.982/0.02 \nUNet_CCA_R+A4 0.956/0.03 0.969/0.02 0.962/0.02 0.262/0.54 0.985/0.02 \nFrom the above results, we can see that the U-Net model with CCA achieves better performance than the one without CCA (especially on the NIH dataset), suggesting that the CCA module enables the U-Net to learn better global contextual information of lung regions and extract more discriminative features, thereby improving performance.\n\n- In general, there are conclusions, subject to discussion, mixed with the results.\nThanks! We will improve this in the camera-ready version.\n\n- The AVD is much smaller for the difficult cases, how do you justify this?\nAfter investigating the lung mask annotations, we found that the masks' contours match the lung boundaries better in the NIH dataset than in the public test set. Since the AVD is calculated based on contour points, it can therefore be smaller on the NIH dataset than on the public test set for the proposed method.\n\n- The AVD has no units.\nAll evaluation metrics are computed at the pixel level, so the unit of AVD is pixels. We will add the units in the revised version."}, {"title": "Response to AnonReviewer2", "comment": "We appreciate your thorough review and helpful suggestions. 
We will try to address your questions and suggestions below:\n\n- The approach is somewhat incremental.\n\nThis paper addresses a major problem of medical image analysis: \u201chow can pathological lungs be segmented from whole abnormal chest X-ray (CXR) images without requiring manual annotations of abnormal lungs for training?\u201d To achieve this, two novel techniques from the computer vision field are applied to medical imaging. Although existing techniques are used, promising pathological lung segmentation results are produced, and the entire proposed framework is applicable to other object segmentation problems in the medical imaging domain. Therefore, the proposed framework is worth highlighting to the medical imaging community.\n\n- It's unclear why MUNIT is the most suitable approach for this work. A very brief discussion would be helpful.\n\nIn this work, given a normal CXR image, we hope to generate diverse abnormal images with different appearances or disease styles. To achieve this, a multimodal unsupervised one-to-many translation mapping should be learned to capture the full distribution of possible outputs, rather than a deterministic one-to-one mapping, without requiring paired training data. To the best of our knowledge, MUNIT is a state-of-the-art multimodal image-to-image translation approach, which makes it a suitable technique to address our concern. After being trained with normal and abnormal CXRs, MUNIT can output various generated abnormal CXRs from a given normal CXR and different random style codes (see examples in Figure 3). We will give a more comprehensive description and discussion of MUNIT in the revised version."}, {"title": "Response to AnonReviewer1", "comment": "Thanks for your helpful feedback and suggestions! We will try to address your comments below:\n\n- More comparisons with other deep learning methods and the proposed segmentor without CCA for segmentation. 
\n\nWe have additionally trained two other state-of-the-art segmentation approaches (i.e. PSPNet[1] and DeepLabv3[2]) and the proposed segmentor without the CCA module (denoted XLSor_noCCA) with two training settings (i.e. R and R + A4) and tested them on both the public testing set and the NIH dataset.\nThe results on the public testing set are as follows:\nMethod RECALL PRECISION DICE AVGDIST VOLSMTY\nXLSor_noCCA_R 0.973/0.02 0.978/0.02 0.975/0.01 0.151/0.53 0.991/0.01\nXLSor_R 0.973/0.02 0.979/0.02 0.976/0.01 0.149/0.51 0.992/0.01 \nPSPNet_R 0.968/0.03 0.966/0.03 0.966/0.02 0.267/0.95 0.987/0.01 \nDeepLabv3_R 0.975/0.02 0.977/0.02 0.976/0.01 0.150/0.53 0.992/0.01 \nXLSor_noCCA_R+A4 0.972/0.02 0.978/0.02 0.976/0.01 0.148/0.47 0.991/0.01 \nXLSor_R+A4 0.974/0.02 0.977/0.02 0.976/0.01 0.146/0.44 0.991/0.01 \nPSPNet_R+A4 0.972/0.02 0.976/0.02 0.974/0.01 0.194/0.63 0.990/0.01 \nDeepLabv3_R+A4 0.973/0.02 0.978/0.02 0.975/0.01 0.147/0.44 0.991/0.01 \nThe results on the NIH dataset are as follows:\nMethod RECALL PRECISION DICE AVGDIST VOLSMTY\nXLSor_noCCA_R 0.965/0.03 0.902/0.10 0.929/0.06 0.952/1.81 0.955/0.06 \nXLSor_R 0.966/0.02 0.927/0.09 0.943/0.05 0.669/1.64 0.966/0.05 \nPSPNet_R 0.926/0.14 0.736/0.23 0.799/0.20 8.992/21.32 0.849/0.18 \nDeepLabv3_R 0.960/0.03 0.926/0.09 0.934/0.06 0.699/1.71 0.962/0.06 \nXLSor_noCCA_R+A4 0.965/0.02 0.981/0.01 0.967/0.01 0.093/0.10 0.990/0.01 \nXLSor_R+A4 0.974/0.01 0.976/0.01 0.975/0.01 0.078/0.06 0.993/0.01 \nPSPNet_R+A4 0.941/0.08 0.955/0.02 0.945/0.05 0.492/0.85 0.962/0.04 \nDeepLabv3_R+A4 0.968/0.02 0.976/0.02 0.969/0.01 0.089/0.10 0.991/0.01\nFrom the above results, we can see that 1) the proposed method achieves the best performance on both test datasets, demonstrating its effectiveness for lung segmentation. 
2) the proposed method with CCA achieves better performance than the one without CCA (especially on the NIH dataset), suggesting that the CCA module enables the model to better learn the global contextual information of lung regions and extract more discriminative features, thereby improving performance.\n[1] Zhao, Hengshuang, et al. \"Pyramid scene parsing network.\" CVPR 2017.\n[2] Chen, Liang-Chieh, et al. \"Rethinking atrous convolution for semantic image segmentation.\" arXiv preprint arXiv:1706.05587 (2017).\n\n- No explanation of why U-Net_A4 achieves better results than U-Net_R and U-Net_R+A3 in Table 2. \n\nIn Table 2, among the U-Net variants, U-Net_A4 achieves better results than the others on all metrics except precision, suggesting that using only the generated images is better than using training sets that contain real images. A possible reason is that the data distribution of the NIH test set is very different from that of the real public training set, but similar to that of the augmented training sets. So when U-Net is trained using only the augmented set, the model may overfit to this distribution. But XLSor_R+A4 is better than XLSor_A4, demonstrating the effectiveness and generalizability of the proposed method."}, {"title": "Reviewer reply", "comment": "I would like to thank the authors for their clarity and the extended experiments. \n\nClearly, the work should be presented at MIDL2019."}], "comment_replyto": ["S1gZRQf87E", "Hkgvc8x0QE", "HJgBS0GkEV", "BkgxtFT6VV"], "comment_url": ["https://openreview.net/forum?id=B1lpb10Ry4&noteId=BkgxtFT6VV", "https://openreview.net/forum?id=B1lpb10Ry4&noteId=Skl9dEppVN", "https://openreview.net/forum?id=B1lpb10Ry4&noteId=HkxXAL3T4N", "https://openreview.net/forum?id=B1lpb10Ry4&noteId=Sygo9DkAVE"], "meta_review_cdate": 1551356580806, "meta_review_tcdate": 1551356580806, "meta_review_tmdate": 1551881978872, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "The manuscript proposes a lung segmentation framework for chest X-rays. 
The authors combine criss-cross attention networks (speedup benefits) with a multimodal unsupervised translation method (MUNIT) to generate virtual abnormal chest X-ray datasets using the expert-segmented normal chest X-ray images.\n\nThe paper is well written in general. The application is very relevant and the results are promising. The approach is not completely original, as it results from the translation of two recently published tools from the pure computer vision field to the medical imaging area. \n\nPros:\n- The authors address the problem in the scenario where expert-segmented datasets for abnormal cases are generally not available. \n- After being trained with normal and abnormal CXRs, MUNIT can output various generated abnormal CXRs from a given normal CXR and different random style codes. \n- The method's performance is compared with other deep learning methods, including the proposed segmentor without criss-cross attention for segmentation. \n- The authors present an extensive evaluation employing publicly available datasets, in which the lung lesions are mild, and their own dataset with severe lesions. The results obtained on the publicly available datasets are similar to state-of-the-art approaches, and particularly better for highly damaged lungs. \n- Both criss-cross attention and especially data augmentation with generative models could be easily extended and be useful in similar segmentation problems. The authors promise to clean up the code and release it soon. \n\nCons:\n- The section 'Criss-Cross Attention Based Network for Lung Segmentation' is quite difficult to understand. The authors will extend the description in the camera-ready manuscript to make it more self-contained. \n- Some of the deformations obtained are not realistic. From the authors' point of view, the lung segmentation performance does not degrade much, but no numbers are reported.\n- The results for severe lesions are better for the proposed model. 
The overlapping measures reach, on average, the results obtained for mild lesions but, surprisingly, the Average Volume Dissimilarity is much smaller for the difficult cases. Reviewing the segmentations, the authors found that the masks' contours match the lung boundaries better in the lung-damaged dataset than in the public test set, but they did not report a reason for it. ", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1lpb10Ry4&noteId=H1xpjfIr8N"], "decision": "Accept"}