{"forum": "HJe6f0BexN", "submission_url": "https://openreview.net/forum?id=HJe6f0BexN", "submission_content": {"title": "Learning with Multitask Adversaries using Weakly Labelled Data for Semantic Segmentation in Retinal Images", "authors": ["Oindrila Saha", "Rachana Sathish", "Debdoot Sheet"], "authorids": ["oindrila_saha13@iitkgp.ac.in", "rachana.sathish@iitkgp.ac.in", "debdoot@ee.iitkgp.ac.in"], "keywords": ["Adversarial learning", "convolutional neural networks", "multitask learning", "semantic segmentation", "retinal image analysis"], "abstract": "A prime challenge in building data driven inference models is the unavailability of statistically significant amount of labelled data. Since datasets are typically designed for a specific purpose, and accordingly are weakly labelled for only a single class instead of being exhaustively annotated. Despite there being multiple datasets which cumulatively represents a large corpus, their weak labelling poses challenge for direct use. In case of retinal images, they have inspired development of data driven learning based algorithms for segmenting anatomical landmarks like vessels and optic disc as well as pathologies like microaneurysms, hemorrhages, hard exudates and soft exudates. The aspiration is to learn to segment all such classes using only a single fully convolutional neural network (FCN), while the challenge being that there is no single training dataset with all classes annotated. We solve this problem by training a single network using separate weakly labelled datasets. Essentially we use an adversarial learning approach in addition to the classically employed objective of distortion loss minimization for semantic segmentation using FCN, where the objectives of discriminators are to learn to (a) predict which of the classes are actually present in the input fundus image, and (b) distinguish between manual annotations vs. segmented results for each of the classes. The first discriminator works to enforce the network to segment those classes which are present in the fundus image although may not have been annotated i.e. all retinal images have vessels while pathology datasets may not have annotated them in the dataset. The second discriminator contributes to making the segmentation result as realistic as possible. We experimentally demonstrate using weakly labelled datasets of DRIVE containing only annotations of vessels and IDRiD containing annotations for lesions and optic disc. Our method using a single FCN achieves competitive results over prior art for either vessel or optic disk or pathology segmentation on these datasets.", "pdf": "/pdf/495ab290d4719874748ef79b034dd66c068760f3.pdf", "code of conduct": "I have read and accept the code of conduct.", "remove if rejected": "(optional) Remove submission if paper is rejected.", "paperhash": "saha|learning_with_multitask_adversaries_using_weakly_labelled_data_for_semantic_segmentation_in_retinal_images", "_bibtex": "@inproceedings{saha:MIDLFull2019a,\ntitle={Learning with Multitask Adversaries using Weakly Labelled Data for Semantic Segmentation in Retinal Images},\nauthor={Saha, Oindrila and Sathish, Rachana and Sheet, Debdoot},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=HJe6f0BexN},\nabstract={A prime challenge in building data driven inference models is the unavailability of statistically significant amount of labelled data. 
Datasets are typically designed for a specific purpose, and accordingly are weakly labelled for only a single class instead of being exhaustively annotated. Although multiple datasets cumulatively represent a large corpus, their weak labelling poses a challenge for direct use. In the case of retinal images, they have inspired the development of data driven learning based algorithms for segmenting anatomical landmarks like vessels and the optic disc as well as pathologies like microaneurysms, hemorrhages, hard exudates and soft exudates. The aspiration is to learn to segment all such classes using only a single fully convolutional neural network (FCN), the challenge being that there is no single training dataset with all classes annotated. We solve this problem by training a single network using separate weakly labelled datasets. Essentially, we use an adversarial learning approach in addition to the classically employed objective of distortion loss minimization for semantic segmentation using an FCN, where the objectives of the discriminators are to learn to (a) predict which of the classes are actually present in the input fundus image, and (b) distinguish between manual annotations and segmented results for each of the classes. The first discriminator forces the network to segment those classes which are present in the fundus image even though they may not have been annotated, e.g., all retinal images have vessels although pathology datasets may not have them annotated. The second discriminator contributes to making the segmentation result as realistic as possible. We demonstrate this experimentally using the weakly labelled datasets DRIVE, containing only annotations of vessels, and IDRiD, containing annotations for lesions and the optic disc. Using a single FCN, our method achieves results competitive with prior art for vessel, optic disc and pathology segmentation on these datasets.},\n}"}, "submission_cdate": 1544736276655, "submission_tcdate": 1544736276655, "submission_tmdate": 1561398137485, "submission_ddate": null, "review_id": ["Hkx4vtbsQN", "HJgc3mq3z4", "B1ej80L2QN"], "review_url": ["https://openreview.net/forum?id=HJe6f0BexN&noteId=Hkx4vtbsQN", "https://openreview.net/forum?id=HJe6f0BexN&noteId=HJgc3mq3z4", "https://openreview.net/forum?id=HJe6f0BexN&noteId=B1ej80L2QN"], "review_cdate": [1548585308444, 1547637682323, 1548672595034], "review_tcdate": [1548585308444, 1547637682323, 1548672595034], "review_tmdate": [1550062747958, 1549880046625, 1548856753342], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper102/AnonReviewer1"], ["MIDL.io/2019/Conference/Paper102/AnonReviewer2"], ["MIDL.io/2019/Conference/Paper102/AnonReviewer3"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["HJe6f0BexN", "HJe6f0BexN", "HJe6f0BexN"], "review_content": [{"pros": "The paper is relatively concise and clearly written. It describes an interesting methodology for handling image datasets of the same imaging modality and target, but with different annotations. The methodology is based on adversarial training of deep neural networks for segmentation. While the application domain (segmentation of different targets in retinal fundus images) is not terribly relevant as many of these applications are well-addressed, the wider context of the manuscript (datasets with disparate annotations) is relevant.", "cons": "The evaluation was not done properly or in a very transparent manner. 
\n\nFor the vessel segmentation task, the authors state that: \u201cSE, SP and ACC are reported as the maximum value over thresholds.\u201d This is poor practice and leads to overoptimistic estimates of the performance of the method. Performance measures such as specificity, sensitivity and accuracy are highly dependent on the selection of the operating point. The selection of the optimal operating point should be part of the model design and ideally selected based on a validation set. \n\nFor the tasks related to the IDRiD challenge, the authors compare their performance to other methods submitted for the evaluation. However, the challenge seems to be closed for new submissions and I assume that the performance for these tasks was evaluated by the authors themselves, and not by the challenge organisers. In my opinion, this disqualifies direct comparison with the official leaderboard of the challenge, unless the authors release the complete code base that reproduces the results described in the paper. \n\nFurthermore, the official leaderboard features results submitted by one of the authors of the paper. The results for the optic disc segmentation are similar to those reported in the paper; however, all other results are significantly worse. Was this the same method as described in this paper? I", "rating": "2: reject", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "- The authors present a deep learning method for fundus image analysis based on a fully convolutional neural network architecture trained with an adversarial loss.\n\n- The method allows detecting a series of relevant anatomical/pathological structures in fundus pictures (such as the retinal vessels, the optic disc, hemorrhages, microaneurysms and soft/hard exudates). This is important when processing these images, where anatomical and pathological structures usually share similar visual properties and lead to false positive detections (e.g. red lesions and vessels, or bright lesions and the optic disc).\n\n- The adversarial loss allows leveraging complementary data sets that do not have all the regions of interest segmented. Thus, it is not necessary to have all the classes annotated in all the images but to have the labels at least in some of them.\n\n- The contribution is original in the sense that complementing data sets is a really challenging task, difficult to address with currently available solutions. The strategy proposed to tackle this issue is not novel, as adversarial losses have been used before for image segmentation. However, it is the first time that it is applied for complementing data sets, and it has some interesting modifications that certainly ensure novelty in the proposal.\n\n- The paper is well written and organized, with minor details to address in this matter (see CONS).", "cons": "- The clear contribution of the article is, in my opinion, the ability to exploit complementary information from different data sets. Taking this into account, I would suggest that the authors incorporate at least one paragraph in Related works (Section 2) describing the current existing approaches to do that.\n\n- It is not clear from the explanation in Section 3.1 how the authors deal with the differences in resolution between the DRIVE and IDRiD data sets. 
It would be interesting to know that aspect, as it is crucial to allow the network to learn to \"transfer\" its own ability for detecting a new region from one data set to another.\n\n- The segmentation architecture does not use batch normalization. Is there a reason for not using it?\n\n- The vessel segmentation performance is evaluated on the DRIVE data set. Despite the fact that this set has been the standard for evaluating blood vessel segmentation algorithms since 2004, the resolution of the images is extremely different from the current ones. There are other existing data sets such as HRF (https://www5.cs.fau.de/research/data/fundus-images/), CHASEDB1 (https://blogs.kingston.ac.uk/retinal/chasedb1/) and DR HAGIS (https://personalpages.manchester.ac.uk/staff/niall.p.mcloughlin/DRHAGIS_database.htm) with higher resolution images that are more representative of current imaging devices. I would suggest incorporating results on at least one of these data sets to better understand the behavior of the algorithm on these images.\n\n- The area under the ROC curve is not a proper metric for evaluating a vessel segmentation algorithm due to the class imbalance between the TP and TN classes (the vessels vs. background ratio is around 12% in fundus pictures). I would suggest including the F1-score and the area under the Precision/Recall curve instead, which have been used already in other studies (see [1] and [2], for example, or Orlando et al. 2017 in the submitted draft).\n\n- The method in [2] should be included in the comparison of vessel segmentation algorithms. To the best of my knowledge, it has the highest performance on the DRIVE data set compared to several other techniques. It would also be interesting to analyze the differences in a qualitative way, as in Fig. 3 (b). The authors of [2] provided a website with all the results on the DRIVE database (http://www.vision.ee.ethz.ch/~cvlsegmentation/driu/), so their segmentations could be taken from there.\n\n- The results for vessel segmentation in IDRiD images do not look as accurate as those in the DRIVE data set. However, since IDRiD does not have vessel annotations, it is not possible to quantify the performance there. It would be interesting to simulate such an experiment by taking an additional data set with vessel annotations (e.g., some of those that I suggested before, HRF, CHASEDB1 or DR HAGIS) and evaluating the performance there, without using any of their images for training. That would be equivalent to assuming that the new data set(s) does (do) not contain the annotations, and would allow quantifying the performance there. Since the HRF data set contains images from normal, glaucomatous and diabetic retinopathy patients, I would suggest using that one. A similar experiment can be made using other data sets with red/bright lesions (e.g. e-ophtha, http://www.adcis.net/es/Descargar-Software-Base-De-Datos-Terceros/E-Ophtha.html) or optic disc annotations (e.g. the REFUGE database, https://refuge.grand-challenge.org). I think this is a key experiment, really necessary to validate whether the method is performing well or not. I would certainly accept the paper if this experiment were included and the results were convincing.\n\n- It is not clear if the values for the existing methods in Table 2 correspond to the winning teams of the IDRiD challenge. Please clarify that point in the text.\n\n- The abstract should be improved. The first 10 lines contain too much wording for a statement that should be much easier to explain. 
I would suggest reorganizing these first lines following something like: (i) Despite the fact that there are several available data sets of fundus pictures, none of them contains labels for all the structures of interest for retinal image analysis, either anatomical or pathological. (ii) Learning to leverage the information of complementary data sets is a challenging task. (iii) Explanation of the method...\n\n\n\n[1] Zhao, Yitian, et al. \"Automated Vessel Segmentation Using Infinite Perimeter Active Contour Model with Hybrid Region Information with Application to Retinal Images.\" IEEE Trans. Med. Imaging 34.9 (2015): 1797-1807.\n\n[2] Maninis, Kevis-Kokitsi, et al. \"Deep retinal image understanding.\" International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2016.", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"pros": "The paper presents a deep learning based approach to mitigate the problem of weakly labelled data in fundus datasets. The authors combine labels from different datasets and perform segmentation, where the outputs are further discriminated between manually labelled and automatic segmentations. In addition, they propose to add another discriminator which provides a score for the presence of the different classes in the datasets. \n\n1) The paper gives a strong motivation towards bridging the gap between the sparse availability of annotations for different class types and the requirement of full annotation of the different classes, or at least the classes that are inherently present in that image/dataset\n2) The paper proposes a novel way of guaranteeing semantic segmentation and learning from multiple datasets\n3) Interesting approach\n4) Well written paper with a few typos to fix ", "cons": "1)\tReference to abstract: The authors write that no semantic segmentation is used. However, I can see that they have done semantic segmentation and the discriminators are used only to judge the truthfulness of the available manual vs segmented maps. Thus, in the abstract, \u201cwe use an adversarial learning approach over a semantic segmentation\u201d needs to be justified. Does it mean that you are using a generative adversarial network where your generator is a semantic segmentation module which is rectified via the discriminators\u2019 decisions? If so, please include it to clarify more.\n2)\tPage 2, second paragraph, \u201cOne of the major\u2026\u201d: segmentation of all parts in a fundus image may not be feasible, especially when proliferative exudates are present in them. So, please correct it with \u201cdifferent classes that are inherently available..\u201d\n3)\tPlease remove words that define functions, e.g., ChannelShuffler(), LeakyRelu() \u2026 check for all those in the entire paper\n4)\tIn Table 2, what might be the reason for hard exudates not performing well, especially when we look at the results for the other class types. Is it due to lack of ground truth labels in your data? 
There are other available datasets for these; maybe including those can improve this result?\n", "rating": "4: strong accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct", "special_issue": ["Special Issue Recommendation"], "oral_presentation": ["Consider for oral presentation"]}], "comment_id": ["SJlnsYdnVN", "HJgPG9O244", "Hyg3K5_h4V", "rJlAKEd34V", "SJxN1j604E", "S1xVjEdyrN"], "comment_cdate": [1549728163554, 1549728271228, 1549728387661, 1549726853561, 1549880027904, 1549923483781], "comment_tcdate": [1549728163554, 1549728271228, 1549728387661, 1549726853561, 1549880027904, 1549923483781], "comment_tmdate": [1555945995247, 1555945994990, 1555945994771, 1555945974572, 1555945969342, 1555945965674], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper102/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper102/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper102/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper102/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper102/AnonReviewer2", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper102/AnonReviewer1", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Response to Reviewer2 - Part 1/2", "comment": "We thank the reviewer for the detailed comments. We appreciate the reviewer\u2019s concern on citing related prior art. However, to the best of our knowledge, we have not been able to find any related prior art solving this problem of integrating knowledge of multiple classes across partially labelled datasets representative of different independent clinical cohorts. Given its absence and the requirement for such a solution, we decided to craft one employing the multi-task adversarial learning approach presented here.\n\n-In reference to Sec. 3.1, the IDRiD images of size 4288 x 2848 px were downsampled to 880 x 584 px while preserving the aspect ratio. DRIVE images were zero padded to a size of 880 x 584 px. This was performed on account of the different sizes of images in these datasets.\n\n-A batch size of 1 was used, as the memory requirement per image during training was ~8GB on the GPU. Thus batchnorm would not be functional and hence was removed.\n\n-We thank the reviewer for the suggestion of this interesting experiment. We have performed inference only on the HRF images using our existing model, without re-training it with any images from the HRF dataset. The results are presented in Table R1. It is worthwhile to note that despite the network not having seen HRF images during training, it performs almost equivalently, with slightly higher sensitivity and accuracy as compared to the competitive prior art for the task of vessel segmentation.\n\nTable R1: Performance evaluation on HRF dataset [1]\n+---------------------------+----------+----------+-----------+-----------+\n| Method | SE | SP | ACC | F1 |\n+---------------------------+----------+----------+-----------+-----------+\n| Proposed | 0.7891 | 0.9642 | 0.9610 | 0.7552 |\n+---------------------------+----------+----------+-----------+-----------+\n|Odstrcilik et al. 
[2] | 0.7741 | 0.9669 | 0.9493 | - |\n+---------------------------+----------+----------+-----------+-----------+\n[1] https://www5.cs.fau.de/research/data/fundus-images/\n[2] J. Odstrcilik et al., \"Retinal vessel segmentation by improved matched filtering: evaluation on a new high-resolution fundus image database,\" in IET Image Processing, vol. 7, no. 4, pp. 373-383, June 2013.\nQualitative results of these experiments can be found here (no login required, view only) https://drive.google.com/file/d/1ML1zEAVBhBZl4paDnDe35YxUKmB_Q1hP/\n\n-We have measured the F-score on the DRIVE dataset for our method. We have obtained 0.7925 as compared to 0.7857 by Orlando (2017) [3], 0.782 by Zhao (2015) [4] and 0.822 by DRIU (Maninis, 2016) [5]. These F-scores will be included in the modified Table 1 in the manuscript.\n\n[3] J. I. Orlando, E. Prokofyeva, and M. B. Blaschko. A discriminatively trained fully connected conditional random field model for blood vessel segmentation in fundus images. IEEE Transactions on Biomedical Engineering, 64(1):16\u201327, Jan 2017.\n[4] Zhao, Yitian, et al. \"Automated Vessel Segmentation Using Infinite Perimeter Active Contour Model with Hybrid Region Information with Application to Retinal Images.\" IEEE Trans. Med. Imaging 34.9 (2015): 1797-1807.\n[5] Maninis, Kevis-Kokitsi, et al. \"Deep retinal image understanding.\" International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2016.\n\n-We thank the reviewer for bringing this paper to our attention, and we will add it and report its F-score in Table 1 in the manuscript.\nManinis, Kevis-Kokitsi, et al. \"Deep retinal image understanding.\" International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2016.\n"}, {"title": "Response to Reviewer2 - Part 2/2 ", "comment": "-We thank the reviewer for suggesting this interesting experiment. Similar to our approach in the response to pt. 3 above, we perform inference only on e-ophtha for exudates and microaneurysms (Table R2), and on REFUGE for the optic disc (Table R3). \nTable R2: Area under Precision-Recall curve on the e-ophtha dataset \n+------------------+---------+\n| Dataset | AUPR |\n+------------------+---------+\n| e-ophtha EX | 0.8235 |\n+-----------------+----------+\n| e-ophtha MA | 0.2500 |\n+-----------------+----------+\n\nTable R3: Performance evaluation on REFUGE dataset \n+---------------------------------+-----------+-----------+\n| Method | F1 | Jaccard |\n+---------------------------------+-----------+-----------+\n| Proposed | 0.9286 | 0.9202 |\n+---------------------------------+-----------+-----------+\n| REFUGE leaderboard [6] | 0.958 | - |\n+---------------------------------+-----------+-----------+\n[6] https://refuge.grand-challenge.org/Results-ValidationSet/\n\nThe area under the Precision-Recall curve for microaneurysms on the e-ophtha dataset is low, as reported in Table R2. This is on account of the size of the lesion being reduced to a few pixels when the images are downsampled to 880 x 584 to match the input size of the network. Also note that our model was not re-trained on any of these datasets, yet it adapted well to the unseen data and produced comparable results.\n\nQualitative results of these experiments can be found here (no login required, view only) https://drive.google.com/file/d/1ML1zEAVBhBZl4paDnDe35YxUKmB_Q1hP/\n\n-The values for the existing methods reported in Table 2 in the manuscript correspond to those of the best performing teams in the IDRiD challenge. 
We will improve its clarity in the manuscript.\n\n-We thank the reviewer for the suggestion on restructuring the abstract. However, it may be noted that the approach presented here is independent of the dataset or modality being investigated. While we demonstrate it with retinal images given the abundance of weakly labelled small sized datasets, it can also be extended to cases of lesion and anatomy segmentation in brain MRI and CT scans, or natural scene image and traffic image semantic segmentation. In order to convey this broader sense of our approach, we prefer to retain the first few lines of the abstract. Realizing that the following sentence appears complex \u201cAs in case of \u2026 classes annotated\u201d, we rephrase it as\n\t \u201cIn the case of retinal images, they have inspired the development of data driven learning based algorithms for segmenting anatomical landmarks like vessels and the optic disc as well as pathologies like microaneurysms, hemorrhages, hard exudates and soft exudates. The aspiration is to learn to segment all such classes using only a single fully convolutional neural network (FCN), the challenge being that there is no single training dataset with all classes annotated.\u201d"}, {"title": "Response to Reviewer 3", "comment": "We thank the reviewer for the insightful observations and suggestions for improvement. We especially thank the reviewer for appreciating the core idea in this paper, of learning attributes of a common imaging modality using multiple partially labelled datasets which originate from different cohorts of studies. The following are clarifications to some of the concerns.\n\n-We have rephrased the sentence \u201cwe use an adversarial learning approach over a semantic segmentation\u201d, on account of the confusion it may cause readers. It now reads: \n\u201cwe use an adversarial learning approach in addition to the classically employed objective of distortion loss minimization for semantic segmentation\u201d\n\n-We have rephrased the sentence \u201cOne of the major challenges in this task is the absence of a single dataset which contain exhaustive pixel level semantic annotation of all parts of the retina. There are several datasets available containing disparate annotations.\u201d to read as\n\u201cOne of the major challenges in this task is the absence of a single dataset which contains exhaustive pixel level semantic annotation of all parts of the retina. However, since creating such a dataset is a highly taxing job, we propose to employ multiple readily available datasets which have reliable annotations for some of the classes, without requiring any single dataset to have all the given classes annotated, or even common classes to be annotated across different datasets.\u201d\n\n-We are typesetting the words denoting functions as mathematical symbols throughout the paper.\n\n-It can be visually observed that hard exudates (EX) have a wide range of visual appearance variability. Typically they can range from as small as 3x3 px to as large as 100x100 px, which poses a representation learning challenge for the deep network. While the smaller ones are abundantly available, the larger ones are rare, and their non-conformity to any common geometry increases the representation learning challenge. Complementing the training with more datasets containing exudates would be one solution to mitigate this drop in performance. 
While our method has 2.2% lower performance than the leaderboard best on IDRiD, it is also worth noting that the performance for hard exudates is the highest among all the pathology classes, and is just below that of the anatomical structure class, the optic disc. We would consider later on adding datasets like e-ophtha during training to improve upon the exudate segmentation, following the philosophy of multiple dataset integration proposed in this paper.\n"}, {"title": "Response to Reviewer 1", "comment": "We thank the reviewer for the valuable suggestions, which have helped us provide additional clarification on the strength of this approach.\n\nThe statement \u201cSE, SP and ACC are reported as the maximum value over thresholds\u201d indicates the following process. The probability map obtained as the output of the model is thresholded at varying levels in [0-1]. Corresponding to each threshold, the SE, SP and ACC values are measured for each class on the test set. The threshold at which the maximum value of these metrics is obtained is set as the inference threshold for the corresponding class. The SE, SP and ACC at this inference threshold are reported in Table 1 in the manuscript. Varying this threshold yields the ROC characteristics, from which the AUC is calculated.\n\nWith regard to the comparison with the IDRiD challenge, the organisers released the test annotations after closing the submissions and we have used the same for reporting performance in this paper ( https://ieee-dataport.org/open-access/indian-diabetic-retinopathy-image-dataset-idrid ). The evaluation metrics used by the organizers were the standard area under the precision-recall curve, and the Jaccard index. We have evaluated the proposed approach using the same metrics. The method we had used in the ISBI 2018 IDRiD challenge is different from the one presented in this paper, hence the difference in performance. In the earlier approach we had employed learning with only a distortion loss for semantic segmentation, while the multi-dataset integration concepts with adversarial losses were not employed then. Also, the semantic segmentation network used was different. Details of the method reported at the ISBI 2018 IDRiD challenge can be found here: https://arxiv.org/abs/1902.03122\n We further confirmed that the evaluation script we used for reporting yielded the same performance measures as reported on the challenge leaderboard for our original submission to the challenge. \n\nThe code for running inference on a new dataset with the proposed method can be found at the link below. The trained model is shared openly (without login requirements) on a cloud drive to be accessed for this inference.\nhttps://github.com/oindrilasaha/multitask-retinal-segmentation"}, {"title": "Good results, comments addressed, new version of the draft missing", "comment": "I appreciate that the authors performed all the experiments necessary to address my comments. I can now see that the method performs decently even on data sets that differ significantly from those used for training. 
This is a major achievement and I think that the paper should be accepted provided that the authors incorporate the modifications in the main draft."}, {"title": "Some comments have been addressed", "comment": "I would like to thank the authors for responding to my comments.\n\nRegarding the first remark, it seems that I have misunderstood the text and I agree with the authors' response.\n\nIt is still my opinion that a direct comparison to results from the challenge leaderboard does not constitute a fair comparison. This is somewhat mitigated by the public release of the model; however, I would like to point out that the linked repository only contains a trained model and not code that can be used to reproduce the experiments in the paper. \n\n"}], "comment_replyto": ["HJgc3mq3z4", "SJlnsYdnVN", "B1ej80L2QN", "Hkx4vtbsQN", "HJgPG9O244", "rJlAKEd34V"], "comment_url": ["https://openreview.net/forum?id=HJe6f0BexN&noteId=SJlnsYdnVN", "https://openreview.net/forum?id=HJe6f0BexN&noteId=HJgPG9O244", "https://openreview.net/forum?id=HJe6f0BexN&noteId=Hyg3K5_h4V", "https://openreview.net/forum?id=HJe6f0BexN&noteId=rJlAKEd34V", "https://openreview.net/forum?id=HJe6f0BexN&noteId=SJxN1j604E", "https://openreview.net/forum?id=HJe6f0BexN&noteId=S1xVjEdyrN"], "meta_review_cdate": 1551356592089, "meta_review_tcdate": 1551356592089, "meta_review_tmdate": 1551881977382, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "While different annotated datasets of the same image modality are often available, they typically include annotations of different classes, covering different regions of the anatomy shown in the images. Datasets that include annotations of all regions of interest are rare and costly to acquire. The authors address this challenging problem by proposing to learn semantic segmentation models from multiple datasets that might each have only part of the classes annotated. An adversarial loss is proposed that allows the model to leverage complementary datasets without requiring all regions of interest to be annotated in all of them.\n\nAll reviewers agree that the paper is interesting and addresses an important challenge in medical imaging. The proposed solution could potentially have a high impact on the field if it proves to be generalizable to other datasets in future research. \n\nThe major concerns of the reviewers have been addressed by the authors\u2019 rebuttal. The criticism about the comparison to the challenge leaderboard is unproblematic. In my opinion, the comparison is valid and adds value to the paper.\n", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=HJe6f0BexN&noteId=r1gdnMUHL4"], "decision": "Accept"}