AMSR/conferences_raw/midl20/MIDL.io_2020_Conference_Cy2fhiE_ql.json
{"forum": "Cy2fhiE_ql", "submission_url": "https://openreview.net/forum?id=QXpeU5Cb1W", "submission_content": {"track": "full conference paper", "keywords": ["Karyotyping test", "Karyotype", "Chromosome", "Metric learning", "Proxy", "Deep learning"], "abstract": "In karyotyping, the classification of chromosomes is a tedious, complicated, and time-consuming process. It requires extremely careful analysis of chromosomes by well-trained cytogeneticists. To assist cytogeneticists in karyotyping, we introduce Proxy-ResNeXt-CBAM which is a metric learning based network using proxies with a convolutional block attention module (CBAM) designed for chromosome classification. RexNeXt-50 is used as a backbone network. To apply metric learning, the fully connected linear layer of the backbone network (ResNeXt-50) is removed and is replaced with CBAM. The similarity between embeddings, which are the outputs of the metric learning network, and proxies are measured for network training.\nProxy-ResNeXt-CBAM is validated on a public chromosome image dataset, and it achieves an accuracy of 95.86%, a precision of 95.87%, a recall of 95.9%, and an F-1 score of 95.79%. Proxy-ResNeXt-CBAM which is the metric learning network using proxies outperforms the baseline networks. In addition, the results of our embedding analysis demonstrate the effectiveness of using proxies in metric learning for optimizing deep convolutional neural networks. As the embedding analysis results show, Proxy-ResNeXt-CBAM obtains a 94.78% Recall@1 in image retrieval, and the embeddings of each chromosome are well clustered according to their similarity. ", "authors": ["Hwejin Jung", "Bogyu Park", "Seungwoo Hyun", "Hanwoong Kim", "Jinah Lee", "Junseok Seo", "Sunyoung Koo", "Mina Lee"], "authorids": ["hwejinjung@doai.ai", "bogyupark@doai.ai", "seungwoohyun@doai.ai", "hanwoongkim@doai.ai", "jinahlee@doai.ai", "junseokseo@doai.ai", "sykoo@gclabs.co.kr", "mnlee@gclabs.co.kr"], "pdf": "/pdf/2c1d0b8ec427ad077febc01591fdac3d9650d4cf.pdf", "paper_type": "methodological development", "title": "Deep Metric Learning Network using Proxies for Chromosome Classification in Karyotyping Test", "paperhash": "jung|deep_metric_learning_network_using_proxies_for_chromosome_classification_in_karyotyping_test", "_bibtex": "@misc{\njung2020deep,\ntitle={Deep Metric Learning Network using Proxies for Chromosome Classification in Karyotyping Test},\nauthor={Hwejin Jung and Bogyu Park and Seungwoo Hyun and Hanwoong Kim and Jinah Lee and Junseok Seo and Sunyoung Koo and Mina Lee},\nyear={2020},\nurl={https://openreview.net/forum?id=QXpeU5Cb1W}\n}"}, "submission_cdate": 1580309845936, "submission_tcdate": 1580309845936, "submission_tmdate": 1587172147961, "submission_ddate": null, "review_id": ["ZyoSv_XpK-", "pfLGzPRcCA", "bKBz9F-qgm"], "review_url": ["https://openreview.net/forum?id=QXpeU5Cb1W&noteId=ZyoSv_XpK-", "https://openreview.net/forum?id=QXpeU5Cb1W&noteId=pfLGzPRcCA", "https://openreview.net/forum?id=QXpeU5Cb1W&noteId=bKBz9F-qgm"], "review_cdate": [1584142069800, 1584040401282, 1583644646753], "review_tcdate": [1584142069800, 1584040401282, 1583644646753], "review_tmdate": [1585229830977, 1585229830196, 1585229829693], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2020/Conference/Paper331/AnonReviewer2"], ["MIDL.io/2020/Conference/Paper331/AnonReviewer4"], ["MIDL.io/2020/Conference/Paper331/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": 
["Cy2fhiE_ql", "Cy2fhiE_ql", "Cy2fhiE_ql"], "review_content": [{"title": "Review", "paper_type": "validation/application paper", "summary": "the authors use a metric learning approach for chromosome classification task. They use the Proxy Ranking loss to train the embedding function. They augment the embedding function with a convolutional block attention module (CBAM), which uses a combination of channel attention as well as spatial attention modules. \n\nThey compare their method on other state-of-the-art methods on that dataset and perform some ablation study. ", "strengths": "The authors compare their method with other deep learning methods and also perform some ablation study especially on different components of the model. \n\nTable 2 explaining the experimental setup is quite helpful. \n", "weaknesses": "1- The motivation of the paper: In the abstract, it is noted: \"In addition, the results of our embedding analysis\ndemonstrate the effectiveness of using proxies in metric learning for optimizing deep convolutional neural networks.\" \nThis puts the emphasis on the proxy-based metric learning approaches. However, there is no ablation study or comparison with other metric learning approaches. \n2- There is no good motivation for why CBAM layer is used and what it does. The only explanation is \". CBAM sequentially infers\ntwo separate attention maps. To adaptively refine attention maps, both attention maps are multiplied to a input feature map.\" I assume the authors are referring to the channel-wise and spatial wise attention modules. Even so, an intuition behind using CBAM in a metric learning embedding function is unclear. \n3 - Better explanation of objective functions and the whole training procedure is needed. \n\n", "questions_to_address_in_the_rebuttal": "How is the class weights p_y set?\n\nWhen doing the ablation study of using and not using CBAM do the two models have the same number of parameters?\n\n\n", "rating": "2: Weak reject", "justification_of_rating": "While I think this is an interesting paper, I dont think it has enough contribution to be accepted at MIDL. I think its more suited for a workshop. \nI would recommend the authors to provide more intuition or justification for why the attention module is useful. ", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct", "recommendation": [], "special_issue": "no"}, {"title": "The justification of the method is unclear", "paper_type": "methodological development", "summary": "This paper proposes a metric learning based model for chromosome karyotyping. The model learns a proxy embedding for each class, and uses cosine similarity to classify new inputs based on the distance between the input embedding and each proxy embedding.\nThe authors use a ResNeXt with CBAM to compute image embeddings, which are then compared to the class proxies.", "strengths": "\n+ The paper achieves state of the art results on a public chromosome classification dataset.\n+ Improves previous results by a significat margin.\n+ The authors show an analysis of the image embeddings computed by their model.", "weaknesses": "\nMy main concern with this paper is that most of the modelling decisions are barely justified.\n1. The authors use CBAM \"to obtain adaptive embedding vectors\". What exactly does that mean? Adaptive to what? Why does it help the model to solve the task?\n2. It is unclear why the authors approach the chromosome classification task as a metric learning problem. 
The paper says that \"The main advantage of metric learning is that it can exploit the semantic similarity of objects to regularize a network\" but that is not explained or backed with experiments.\n3. As mention in the paper, face verification and image retrieval are successful applications of metric learning. However, these tasks can't be approached as a classification problem, as opposed to chromosome classification. What is the reason for a metric learning approach to be better than a classification model if the task can easily be framed as a classification problem?\n4. Learning the proxies and applying softmax on the dot product between each proxy and each sample embedding is like learning a classifier on the embeddings, where the weights of the classifier are given by the proxies. Then, does the improvement come from the sampling strategy mentioned in the paper? ", "questions_to_address_in_the_rebuttal": "* What is the justification of using CBAM in this model?\n* Why approach the task with metric learning instead of approaching it with a classifier model?", "rating": "2: Weak reject", "justification_of_rating": "Even though the results are good, the experimental section is a bit poor, and should be improved to show where does the improve in performance come from. It is not clear why the proposed model is better than previous approaches, as it is not compared with a standard model that learns a classifier on top of the CBAM embeddings. ", "confidence": "3: The reviewer is fairly confident that the evaluation is correct", "recommendation": [], "special_issue": "no"}, {"title": "Review of \"Deep Metric Learning Network using Proxies for Chromosome Classification in Karyotyping Test\"", "paper_type": "validation/application paper", "summary": "The paper proposed Proxy-ResNeXt-CBAM, a metric learning network that has an attention mechanism called CBAM and uses proxies in chromosome classification. The goal is to assist cytogeneticists with karyotyping and help them more efficiently classify chromosomes. Their best model outperforms conventional classification deep learning networks.", "strengths": "The authors utilized the publicly available Bioimage Chromosome Classification dataset. Results on this benchmark seem promising and outperform some recent baselines. The experimental analysis seems thorough", "weaknesses": "- The paper lacks original contributions. Neither deep metric learning with proxy nor CBAM was originally invented. It is thus a typical \"existing A + existing B, applied to some new C\" type of work.\n\n- The definition of \"proxy\" is very much unclear from paper. Is that just the hidden features of CNN, optimized under a cosine distance? If so, the authors over-complicated their description and may have over-stated contribution.\n\n- The motivation of CBAM is very unclear: it looks like the authors adopted that only because \"the classification performance of our network is higher\". Why does it help the proposed metric learning? Why just this specific attention, given the numerous attention mechanisms developed? None of those questions is well justified nor motivated.\n\n- The writeup is not easy to follow, and reading experience is not pleasant. Specifically, the authors seem to often unnecessarily self-repeat, e.g. 
\"We introduce Proxy-ResNeXtCBAM which is a metric learning-based network using proxies ...\"\" Proxy-ResNeXt-CBAM which is the metric learning network using proxies outperforms ...\"\"Proxy-ResNext is a metric learning network that employs proxies\".", "rating": "1: Strong reject", "justification_of_rating": "See above weakness: 1) lack or original contribution; 2) the definition of \"proxy\" is very much unclear; 3) the motivation of CBAM is very unclear and not well motivated; 4) the writeup is very sloppy", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct", "recommendation": [], "special_issue": "no"}], "comment_id": ["u0yTfW8fsmL", "hj6q21sNPH5"], "comment_cdate": [1585190085689, 1585190267652], "comment_tcdate": [1585190085689, 1585190267652], "comment_tmdate": [1585229831482, 1585229830713], "comment_readers": [["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2020/Conference/Paper331/Authors", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper331/Authors", "MIDL.io/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Answer", "comment": "Thank you for your effort to read our paper and helpful comments.\n\nHere is our answers of your comments.\n\n1) How is the class weights p_y set? \n\n::The class weights (p_y) are the proxies. They represent resulting subspaces of class embeddings. In this study, there is a total of 24 proxies for classes (1, 2, 3, ..., X, Y). And each of them has 2048 dimension which is same as the output size of networks.\n\n2) When doing the ablation study of using and not using CBAM do the two models have the same number of parameters?\n\n::As you know, CBAM sequentially infers two separate attention maps inside the network. One is made by the global max pooling and the other is made by the global average pooling. And both of them are concatenated. Therefore, Using CBAM has little bit more parameters. \n\n\n\n\nAdditionally, here is our opinion to the weaknesses.\n1- The motivation of the paper: In the abstract, it is noted: \"In addition, the results of our embedding analysis demonstrate the effectiveness of using proxies in metric learning for optimizing deep convolutional neural networks.\" \nThis puts the emphasis on the proxy-based metric learning approaches. However, there is no ablation study or comparison with other metric learning approaches. \n\n:: This is the first study that metric learning is applied on the public dataset. All the baseline studies used basic convolutional neural networks. Therefore we compared the classification performance with them. \nIn addition, we reported ablation study results of our method. Classification performance of networks which use metric learning method and which does not use metric learning method are listed in Table 3.\n\n2- There is no good motivation for why CBAM layer is used and what it does. The only explanation is \". CBAM sequentially infers two separate attention maps. To adaptively refine attention maps, both attention maps are multiplied to a input feature map.\" I assume the authors are referring to the channel-wise and spatial wise attention modules. Even so, an intuition behind using CBAM in a metric learning embedding function is unclear. \n\n:: As you know, CBAM consists of the channel-wise module and the spatial wise module. In each module, CBAM has two pooling layers, the global max pooling and the global average pooling. 
The max pooling obtains most remarkable one feature and the average pooling obtains comprehensive one feature. Therefore, using both features is better to represent chromosomes than using features only from global average pooling.\n\n3 - Better explanation of objective functions and the whole training procedure is needed. \n:: As we explain the proxies in the above answer, proxies has same dimension as the output of network(embbedings). Since we used similarity based soft max loss, one embedding and 24 proxies are compared based on the cosine similarity in the training phase. \n\n\nWe wish our answers and opinions can help you understand more precisely. \n\nThank you."}, {"title": "Answer", "comment": "Thank you for your effort to read our paper and helpful comments.\n\nHere is our answers of your comments.\n\n1) What is the justification of using CBAM in this model?\n\n::Deep metric learning aims to measure the similarity between images based on their convolutional embeddings, and uses an optimal distance metric for learning tasks. Embeddings of images from the same class are closer in distance than embeddings of images from different classes. Therefore, accurately converting images into embeddings is crucial. Therefore we employ CBAM which is proved to generate attention maps which is effective to represent images in deep convolutional neural networks.\n\n\n2) Why approach the task with metric learning instead of approaching it with a classifier model?\n\n::In karyotyping task, inter-class type karyograms are highly similar but slightly different. In addition, the \u2018resolution\u2019 of chromosomes which means the length of karyograms makes karyotyping difficult. In the cytogenetics field, long chromosomes with band patterns that cytogeneticists can clearly identify are called 'high resolution chromosomes,' but short chromosomes with compressed band patterns are called 'low resolution chromosomes.' The 'resolution' of chromosomes can vary depending on the stage of cell division, even if the chromosomes are collected from a single patient. \n\nGenerally metric learning outperforms in the tasks which is needed to distinguish objects with similar shape but different. Therefore, metric learning is widely used in face recognition tasks and content based image retrieval tasks. Since we thought karyotyping task is close with these type of tasks, we approach it with the metric learning.\n\n\n\n\n\nAdditionally, here is our opinion to the weaknesses which are not handled in 'Questions To Address In The Rebuttal' .\n1- The authors use CBAM \"to obtain adaptive embedding vectors\". What exactly does that mean? Adaptive to what? Why does it help the model to solve the task?\n\n:: As you know, CBAM consists of the channel-wise module and the spatial wise module. In each module, CBAM has two pooling layers, the global max pooling and the global average pooling. The max pooling obtains most remarkable one feature and the average pooling obtains comprehensive one feature. Therefore, using both features is better to represent chromosomes than using features only from global average pooling.\n\n\n4- Learning the proxies and applying softmax on the dot product between each proxy and each sample embedding is like learning a classifier on the embeddings, where the weights of the classifier are given by the proxies. Then, does the improvement come from the sampling strategy mentioned in the paper?\n:: Sampling strategy helps to learn the model efficiently by sampling the classes balanced. 
therefore, it can do fast convergence during the training.\n\n\n\nWe wish our answers and opinions can help you understand more precisely. \n\nThank you."}], "comment_replyto": ["ZyoSv_XpK-", "pfLGzPRcCA"], "comment_url": ["https://openreview.net/forum?id=QXpeU5Cb1W&noteId=u0yTfW8fsmL", "https://openreview.net/forum?id=QXpeU5Cb1W&noteId=hj6q21sNPH5"], "meta_review_cdate": 1586006906713, "meta_review_tcdate": 1586006906713, "meta_review_tmdate": 1586006906713, "meta_review_ddate ": null, "meta_review_title": "MetaReview of Paper331 by AreaChair1", "meta_review_metareview": "All three reviewers seem to agree on the fact that the results presented by the paper are promising and outperform the results from recently published baselines on a public chromosome classification dataset. However, the main issue of the paper is a lack of novelty since main contributions (deep metric learning with proxy and CBAM ) have been proposed before. In addition, the paper shows little motivation for deep metric learning with proxy and CBAM. The rebuttal tries to address the motivation issues mentioned above, but not very successfully given that there is a very pragmatic explanation without any rational decision process that can justify the use of deep metric learning with proxy and CBAM. One reviewer also mentions that the paper does not explain the objective functions and the whole training procedure. Given these issues and a rebuttal that did not answered the reviewers' questions, I do not recommend this paper for acceptance.", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2020/Conference/Program_Chairs", "MIDL.io/2020/Conference/Paper331/Area_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=QXpeU5Cb1W&noteId=CV7PcVL3jQ8"], "decision": "accept"}
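
Note appended for readability of the record above: the author responses describe the core mechanism of Proxy-ResNeXt-CBAM as one learnable proxy per chromosome class (24 proxies, each with the same 2048 dimensions as the backbone embedding) and a softmax over cosine similarities between an image embedding and all proxies. The following is a minimal, illustrative PyTorch-style sketch of such a proxy similarity softmax loss, assembled from that description only; the class name, the scale (temperature) value, and all variable names are assumptions and are not taken from the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ProxySoftmaxLoss(nn.Module):
    # Sketch of a proxy-based cosine-similarity softmax loss (assumed form,
    # not the authors' code). num_classes=24 and embed_dim=2048 follow the
    # author responses; the scale factor is a common but assumed choice.
    def __init__(self, num_classes=24, embed_dim=2048, scale=16.0):
        super().__init__()
        # One learnable proxy per chromosome class (1..22, X, Y).
        self.proxies = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.scale = scale

    def forward(self, embeddings, labels):
        # Cosine similarity between each embedding and every proxy.
        emb = F.normalize(embeddings, dim=1)
        prox = F.normalize(self.proxies, dim=1)
        logits = self.scale * emb @ prox.t()  # shape: (batch, num_classes)
        # Softmax cross-entropy over proxy similarities.
        return F.cross_entropy(logits, labels)

Usage (under the same assumptions): embeddings would come from the backbone with its final linear layer removed, and at test time the predicted class would be the proxy with the highest cosine similarity to the input embedding.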