{"forum": "InBpFcpQF", "submission_url": "https://openreview.net/forum?id=kGyy-v8QAO", "submission_content": {"keywords": ["Deep Metric Learning", "Attention mechanism", "Medical Image", "Subspace Embedding", "Skin lesion imaging", "Interpretability"], "track": "full conference paper", "authorids": ["sukesh.adiga-vasudeva.1@etsmtl.net", "jose.dolz@etsmtl.ca", "herve.lombaert@etsmtl.ca"], "title": "Adding Attention to Subspace Metric Learning", "authors": ["Sukesh Adiga V", "Jose Dolz", "Herve Lombaert"], "abstract": "Deep metric learning is a compelling approach to learn an embedding space where the images from the same class are encouraged to be close and images from different classes are pushed away. Current deep metric learning approaches are inadequate to explain visually which regions contribute to the learning embedding space. Visual explanations of images are particularly of interest in medical imaging, since interpretation directly impacts the diagnosis, treatment planning and follow-up of many diseases. In this work, we propose a novel attention-based metric learning approach for medical images and seek to bridge the gap between visual interpretability and deep metric learning. Our method builds upon a divide-and-conquer strategy, where multiple learners refine subspaces of a global embedding. Furthermore, we integrated an attention module that provides visual insights of discriminative regions that contribute to the clustering of image sets and to the visualization of their embedding features.  We evaluate the benefits of using an attention-based approach for deep metric learning in the tasks of image clustering and image retrieval using a public benchmark on skin lesion detection. Our attentive deep metric learning improves the performance over recent state-of-the-art, while also providing visual interpretability of image similarities. 
", "paperhash": "v|adding_attention_to_subspace_metric_learning", "TL;DR": "An attention-based metric learning approach for medical images and seek to bridge a gap between visual interpretability and metric learning.", "paper_type": "both", "pdf": "/pdf/a6b6098116462d55ca47a302a535251c2ec9d9ba.pdf", "_bibtex": "@inproceedings{\nv2020adding,\ntitle={Adding Attention to Subspace Metric Learning},\nauthor={Sukesh Adiga V and Jose Dolz and Herve Lombaert},\nbooktitle={Medical Imaging with Deep Learning},\nyear={2020},\nurl={https://openreview.net/forum?id=kGyy-v8QAO}\n}"}, "submission_cdate": 1579955763915, "submission_tcdate": 1579955763915, "submission_tmdate": 1587172218718, "submission_ddate": null, "review_id": ["OuFwvQ-rEh", "yJv27ciEEF", "cx6Da7GB7U", "pt5itRHD2"], "review_url": ["https://openreview.net/forum?id=kGyy-v8QAO&noteId=OuFwvQ-rEh", "https://openreview.net/forum?id=kGyy-v8QAO&noteId=yJv27ciEEF", "https://openreview.net/forum?id=kGyy-v8QAO&noteId=cx6Da7GB7U", "https://openreview.net/forum?id=kGyy-v8QAO&noteId=pt5itRHD2"], "review_cdate": [1584201800146, 1584147836207, 1584126113159, 1582772921445], "review_tcdate": [1584201800146, 1584147836207, 1584126113159, 1582772921445], "review_tmdate": [1585229284372, 1585229283812, 1585229283311, 1585229282782], "review_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2020/Conference/Paper270/AnonReviewer1"], ["MIDL.io/2020/Conference/Paper270/AnonReviewer4"], ["MIDL.io/2020/Conference/Paper270/AnonReviewer2"], ["MIDL.io/2020/Conference/Paper270/AnonReviewer3"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["InBpFcpQF", "InBpFcpQF", "InBpFcpQF", "InBpFcpQF"], "review_content": [{"title": "Ancillary loss based on metric learning for supervised segmentation tasks with UNet", "paper_type": "both", "summary": "This paper proposes a supervised deep representation learning system combining metric learning losses and attention mechanisms. The proposed architecture builds on the recently proposed metric learning approach \u2018divide and conquer\u2019 proposed par Sanakoyeu et al. The general idea is to first learn a general embedding based on metric learning loss such as the contrastive loss, then perform unsupervised classification (k-means) in the global embedding space to separate the data into subgroups that are further directed to specific embedding models. The main contribution of this paper is to add an attention mechanism to this architecture and evaluate it on a skin lesion dataset from the ISIC 2019 challenge. The authors compare their method with standard embedding methods based on three metric learning losses, namely the contrastive, triplet and margin losses, as well as to the original divide and conquer algorithm without any attention mechanism. They demonstrate that their method compare favourably to all other methods.", "strengths": "The paper is well written, state-of-the art is clear and recent. The main novelty of this paper is to evaluate this newly proposed representation learning as well as improve it by adding the attention mechanism, which is shown to improve performance.", "weaknesses": "Description of the method and experiments lacks details. I have comments regarding the training and testing phase of the whole pipeline depicted on figure 1. 
I suggest the authors address these questions to improve the soundness of the proposed methodological contribution.", "questions_to_address_in_the_rebuttal": "I have comments regarding the training and testing phases of the whole pipeline depicted in figure 1:\n\n- It is not clear from figure 1 and from the text whether step 1 (global embedding) and step 2 (subspace embedding) are trained and updated in an end-to-end fashion. The paragraph \u201cAssuming the learned embedding space is improving over time by each learner, we re-group the images at every T epochs by mapping the images using the entire embedding space such that images can be better sampled. The full embedding space is then composed by merging all learners. The entire image set is subsequently used to stabilize the embedding space.\u201d should be clarified. Please define \u2018the learned embedding space\u2019, \u2018the entire embedding space\u2019 and \u2018the full embedding space\u2019. It is not clear whether these different embeddings correspond to the same or different embedding spaces among the following: the global representation space of dimension d learned after step 1, each subspace embedding of dimension d/K, or the concatenation of the subspace embeddings of dimension d. In figure 1, I guess the mapping function of each subspace takes an input feature vector of dimension d, and not m as printed?\n\n- As far as I understand, the \u2018full embedding space\u2019 corresponds to the concatenation of the K feature vectors of dimension d/K. Then, for a training image that has been assigned to cluster k after step 1, this vector would contain 0 except for indices k to k + d/K - 1, which would contain the subspace representation learned by subspace learner k? Please clarify.\n\n- Could you please clarify what happens every T epochs: do you extract the concatenated feature vector as detailed above and apply a K-means algorithm on this new set of feature vectors to derive new clusters at the beginning of step 2? \nNow considering the test phase, assuming that all models of steps 1 and 2 have been trained, what is the path of a test sample? I guess a test image is passed through step 1, which outputs both an embedding vector of size d and a cluster k. Then, this feature vector is fed to the corresponding subspace learner k, which outputs a representation vector of dimension d/K (where K is the number of classes)? Please correct and clarify the text if I am wrong. Assuming that this is correct, the t-SNE visualisation of figure 2 is based on feature vectors of dimension d/K (where d = 128 and K = 8) for figures b) to f), while it is based on feature vectors of dimension d for figure a)? ", "rating": "3: Weak accept", "justification_of_rating": "The authors propose an original methodological contribution as well as well-conducted evaluation experiments on the ISIC 2019 skin lesion dataset. \nI would rate this paper 'strong accept' if the description of the methodology were clearer.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct", "recommendation": ["Oral"], "special_issue": "no"}, {"title": "Adding Attention to Subspace Metric Learning", "paper_type": "validation/application paper", "summary": "In this paper, the authors propose the addition of an attention-based metric learning approach for medical images with the goal of introducing visual interpretability for metric learning. \n\nThey use the DivConq approach as in Sanakoyeu et al. 
that splits the learned embedding space and the data into multiple groups, thereby learning independent sets of metric distances over different subspaces. Here the authors extend the DivConq deep metric learning approach to medical imaging. They use the ResNet-50 architecture for their network. \n", "strengths": "The authors provide a quantitative evaluation over a public benchmark dataset of skin lesions, compare their method to DivConq among other methods, and show good results. \n\nThe paper is written clearly. The experiments are described in detail and are sufficiently evaluated. \n", "weaknesses": "\nThe attention model is not described well and is not motivated properly for this problem. Is $A(S(x_i)) \\circ S(x_i)$ a composition operation? \n\nWhile incorporating what the authors call the \"attention model\" is a good idea, this dataset is not the best suited for demonstrating the idea. The authors should choose a more challenging dataset that has multiple lesions or a heterogeneity of tumors. If the attention maps are able to successfully capture relevant information in those datasets, that will test the strength of the model. ", "questions_to_address_in_the_rebuttal": "On this specific dataset, perhaps a weakly trained lesion detector with the multiple subspace learning may also give similar results. The authors should discuss this. \n", "rating": "3: Weak accept", "justification_of_rating": "The paper presents an application of the deep attentional model for image clustering and image retrieval. While many of the ideas have been proposed before, the application to the skin lesion dataset is novel and thus justifies a discussion. \n", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature", "recommendation": ["Poster"], "special_issue": "no"}, {"title": "Novel attention-based metric learning approach for medical image analysis", "paper_type": "methodological development", "summary": "The paper presents a novel algorithm that adds attention to the metric learning scenario for medical image analysis. The algorithmic discussion is provided in detail and the results sufficiently showcase the efficacy of the proposed method over other similar methods in this sub-field. The paper overall makes a good contribution to the field.", "strengths": "- The proposed algorithm showcases that, similar to other scenarios/problems, adding attention to state-of-the-art methods (in this case for metric learning) improves performance overall. \n\n- The results in this regard are good and the method seems to perform better than existing similar methods. \n\n- The proposed algorithm has an inherent advantage of not requiring additional processing at test time.\n\n- The experimental details are explained clearly, contain all the required information and are easy to follow. ", "weaknesses": "- The results are shown only for one dataset. It would have been good to see how the proposed method performs on at least two publicly available datasets to improve the reader's confidence that the method works in different scenarios.", "questions_to_address_in_the_rebuttal": "- Can you please include results from some other similar method (other than DivConq) published recently on this dataset, if possible? 
It would be helpful for the readers to get additional context.", "rating": "3: Weak accept", "justification_of_rating": "The paper makes a novel contribution to the sub-field of metric learning for medical image analysis. The results are quite good and improve over the current state-of-the-art for the dataset considered. ", "confidence": "3: The reviewer is fairly confident that the evaluation is correct", "recommendation": [], "special_issue": "no"}, {"title": "divide and conquer metric learning with attention", "paper_type": "both", "summary": "The authors have added an attention module to the divide-and-conquer metric learning approach published at CVPR 2019. The claim is that adding attention can help both metric learning and interpretability. The modified method was applied to skin lesion image retrieval, with a performance comparison against other metric learning methods. \n\n", "strengths": "1. The idea of adding interpretability to metric learning in medical image analysis is intriguing. \n\n2. The modified method was applied to the ISIC data.\n\n3. Empirical results show the effectiveness of the modified method. ", "weaknesses": "1. It appears to me that the only difference between the proposed method and the CVPR 2019 reference is the addition of attention modules. Hence, the presented work is incremental with limited novelty. \n\n\n2. The description of some important experimental setups is vague. For example, after \"combining\" subspaces, how does that map back to the full embedding space to improve K-means clustering? Or is the full embedding independent of the subspace embeddings? After embedding, when evaluating NMI and recall based on K-means clustering as well as image retrieval, was the full embedding space used without referring back to the subspace learners? If that is the case, why were the attention maps of the subspace learners checked instead of those of the full embedding space? \n\n3. Based solely on the NMI and recall evaluation, it does not seem that adding attention improves much over the original divide-and-conquer implementation. In particular, the authors should provide the standard deviation values from the 5 runs. \n\n4. If interpretability is one of the goals of adding attention, the qualitative analysis of attention maps should be better discussed. It is not clear to me how the visualized maps show that the attention \"learned attentions to variations in\" size, scale, artifacts, etc. \n\n5. It may not be appropriate to simply check the clustering results with K set to 8, which is actually the number of image categories in the ISIC dataset the authors used. \n\n\n\n", "questions_to_address_in_the_rebuttal": "1. After \"combining\" subspaces, how does that map back to the full embedding space to improve K-means clustering? Or is the full embedding independent of the subspace embeddings? After embedding, when evaluating NMI and recall based on K-means clustering as well as image retrieval, was the full embedding space used without referring back to the subspace learners? If that is the case, why were the attention maps of the subspace learners checked instead of those of the full embedding space? \n\n2. Based solely on the NMI and recall evaluation, it does not seem that adding attention improves much over the original divide-and-conquer implementation. In particular, the authors should provide the standard deviation values from the 5 runs. \n\n3. If interpretability is one of the goals of adding attention, the qualitative analysis of attention maps should be better discussed. 
It is not clear to me how the visualized maps show that the attention \"learned attentions to variations in\" size, scale, artifacts, etc. \n\n4. How do clustering/retrieval performances change if K is not equal to 8? \n\n\n", "rating": "2: Weak reject", "justification_of_rating": "The proposed method has limited novelty. It is not clear that the selected attention mechanism is the best choice in the literature. The presentation is not clear enough. The empirical results are not convincing enough. ", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature", "recommendation": [], "special_issue": "no"}], "comment_id": ["KZv2xtO9M9T", "Fg0N3UMnczV", "I4UWdB25d-n", "NI1acWWEsFZ", "ipXw4i1AfFk", "RW16IXEo813"], "comment_cdate": [1585993677441, 1585367281757, 1585369040890, 1585367080165, 1585366876405, 1585366725286], "comment_tcdate": [1585993677441, 1585367281757, 1585369040890, 1585367080165, 1585366876405, 1585366725286], "comment_tmdate": [1585993677441, 1585374197798, 1585369040890, 1585367080165, 1585366920986, 1585366725286], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2020/Conference/Paper270/AnonReviewer1", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper270/Authors", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper270/Authors", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper270/Authors", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper270/Authors", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper270/Authors", "MIDL.io/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Thank you for your reply", "comment": "I thank the authors for their reply addressing most of my concerns. I will positively revise my rating and consider that this paper should be part of the MIDL program."}, {"title": "Reply to reviewer's comments", "comment": "We thank the reviewer for the insightful comments. We would like to emphasize to the reviewer that the goal of this paper is to bring interpretability to deep metric learning (DML) methods, which remains almost unexplored, particularly in medical imaging. The choice of the attention mechanism is not motivated from a best-performance perspective. Instead, while we are aware of the existence of many attention mechanisms in the literature, our goal is to show, simply, that adding attention to a feature subspace can make DML methods interpretable with the help of the generated attention maps. Investigating the effect of other attention mechanisms, such as combining spatial and channel attention, can be explored in future work. Furthermore, we have integrated several suggestions from the reviewer that strengthen the experimental section. Please find below the detailed answers to the specific concerns.\n\nQ1. Clarity on training and evaluation.\nA1. All the blocks/layers in step 1 (corresponding to grouping the data) and step 2 (corresponding to the subspace learners) are actually the same, i.e., they have shared weights. Thus, the parameter updates based on one have an impact on the others. During the first step, images in $\\mathbb{R}^m$ are mapped onto the embedding space ($\\mathbb{R}^d$) and then the data is grouped using K-means. 
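For concreteness, this first (grouping) step can be sketched as follows; this is a minimal illustration assuming PyTorch and scikit-learn, where embed and full_loader are hypothetical names rather than our exact implementation:\n\n```python\nimport numpy as np\nimport torch\nfrom sklearn.cluster import KMeans\n\nK = 8  # number of subspace learners used in our experiments\n\n# Map every image from R^m onto the d-dimensional embedding space,\n# then group the embedded data into K clusters with K-means.\nwith torch.no_grad():\n    feats = np.concatenate([embed(x).cpu().numpy() for x, _ in full_loader])\ncluster_ids = KMeans(n_clusters=K).fit_predict(feats)  # one group id per image\n```\n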
In a second step, the corresponding images, grouped in $\mathbb{R}^d$, are assigned to the individual learners and each subspace is subsequently learnt. As learning progresses, the data is re-grouped in the first step and thus the learned sub-spaces also evolve. Finally, once training starts to converge, the model is fine-tuned with the full embedding space to stabilize the combined subspaces. In contrast, during inference, only the first step of the pipeline, i.e., the full embedding space, exists. Therefore, all the evaluation is done by employing the full embedding space $\mathbb{R}^d$.\n\nQ2. Improvement of adding attention over the original DivConq, and standard deviation (SD) values.\nA2. We would like to emphasize that the goal of this work is to bridge the gap between visual interpretability and DML. As such, in addition to improving the state-of-the-art, our proposed method enables visual interpretability of the learnt metric task, which is infeasible with previous DML approaches. Having said that, we have followed the suggestion of the reviewer and added the SD values from the 5 runs across all the methods. When compared to the DivConq approach, our model increases performance while reducing the SD, and also provides visual attention. In addition, we would like to refer the reviewer to the answer to Q4. Following his/her suggestion, we have evaluated the impact of having a different K value, whose results are reported below in A4. These results show that, with K=12, the gap between DivConq and our approach is nearly 4% in terms of NMI, much larger than in the case of K=8.\n\nMethod | NMI | R$@$1\n--- | --- | ---\nContrastive loss | 47.74 $\pm$ 1.30 | 83.92 $\pm$ 0.63\nTriplet loss | 85.90 $\pm$ 1.05 | 94.06 $\pm$ 0.90\nMargin loss | 90.09 $\pm$ 2.17 | 98.37 $\pm$ 0.84\nDivConq | 93.08 $\pm$ 3.57 | 99.59 $\pm$ 0.18\nOurs | 94.13 $\pm$ 2.76 | 99.61 $\pm$ 0.13\n\nQ3. Qualitative analysis of attention maps should be better discussed.\nA3. We will take into consideration the concern of the reviewer regarding the visualization of attention maps that show learned attention to variations in size, scale or other artifacts. We have uploaded to an anonymized repository (link below) a set of images that address these concerns. In particular, these images show how the attention maps are successfully generated under different situations, such as different shapes, multiple targets or size variations.\nLink:  https://github.com/anonymous3578/paper270/blob/master/README.md#attention-maps\n\nQ4. How do clustering/retrieval performances change if K is not equal to 8?\nA4. We want to clarify that the initial choice of K is not based on the number of classes, as suggested by the reviewer, but on the best value obtained in the original DivConq approach. We have followed the suggestion of the reviewer and re-run the experiments with a different K value (K=12). The results of these experiments for both the original DivConq and our approach are reported below. In this table, we can observe that if we increase the value of K, our method not only improves performance but also increases the gap with respect to the original DivConq approach. In particular, there is a gain of nearly 4% in performance, while the standard deviation is reduced by 1.5%. Because of time constraints, we did not perform a comprehensive analysis of the impact of the value of K. 
Nevertheless, we plan to include this in an extended version of this work.\n\nMethod | NMI | R$@$1\n--- | --- | ---\nDivConq | 93.64 $\pm$ 2.19 | 99.65 $\pm$ 0.12\nOurs | 97.81 $\pm$ 0.72 | 99.74 $\pm$ 0.04"}, {"title": "Reply to reviewer's comments", "comment": "We first thank all the reviewers for their valuable time and for providing positive and constructive feedback on our paper. The reviewers' general concerns were a possible confusion about the training and evaluation procedure, as well as the need to strengthen the motivation of our attention module for improving deep metric learning (DML). Both are explained in detail individually, below. The main goal of this paper is to bridge the gap between visual interpretability and deep metric learning, which is almost unexplored in medical imaging. Beyond improving the current state-of-the-art, we have shown that exploiting an attention mechanism can make the results of a DML network explainable with the help of the generated attention maps. Moreover, this novel approach enables further research on the downstream tasks of prediction and segmentation among the medical imaging community. Each reviewer's specific concerns are addressed individually below. We will incorporate the required changes in our paper."}, {"title": "Reply to reviewer's comments", "comment": "We would like to thank the reviewer for appreciating our novel contribution and for the positive suggestions. As highlighted by the reviewer, the methodology is properly presented, and the contribution and results are convincing. Specific concerns are addressed below.\n\nQ1. Results on other similar methods (other than DivConq) published recently on this dataset, if possible.\nA1. We agree with the reviewer that including additional methods in the evaluation would definitely strengthen the paper. Unfortunately, works focusing on this dataset rely on classification approaches, which are different in nature from deep metric learning. This makes a direct comparison with our approach challenging. Along the same line, this difference in the nature of these additional methods also explains why we have included in our evaluation a few other deep metric learning approaches that have been employed in medical imaging, i.e., the contrastive, margin and triplet losses. While these techniques have attracted a lot of attention in the computer vision community, the literature in medical imaging remains scarce. Thus, another goal of this work is to encourage and stimulate further research on using attention mechanisms in deep metric learning, with applications in medical imaging, in order to improve the interpretability of the tasks of interest.\n"}, {"title": "Reply to reviewer's comments", "comment": "We would like to thank the reviewer for the helpful comments and possible improvements. As highlighted by the reviewer, the paper is presented clearly and the results are sufficiently evaluated and convincing. The main concern was that the motivation for our attention model needs further strengthening. This is addressed below.\n\nThe main idea of this paper is to propose a methodology that leverages a state-of-the-art deep metric learning (DML) approach to improve the interpretability of a learned task, which is extremely useful in medical imaging. Top-down or bottom-up strategies (e.g., class activation maps (CAM)) have been proposed to explain the decisions of classification networks. Unfortunately, DML does not rely on a class prediction score, but on a similarity metric. 
Obtaining such interpretable explanations in deep metric networks is, therefore, not straightforward. However, attention mechanisms have shown that deep networks can focus their learning on important regions of the images, resulting in more focused models. Since adding attention does not require knowing the classes (i.e., it can be learned in an unsupervised manner), we propose to include an attention mechanism in a DML network to interpret its predictions. More precisely, we have employed an attention module, consisting of three 3$\times$3 convolution layers with filter sizes of {128, 32, 1}, each followed by a ReLU activation except the final layer, which uses a Sigmoid activation, to produce an attention map. The map is then multiplied element-wise with the output of the feature encoding, i.e., $A(S(x_i)) \circ S(x_i)$, where $\circ$ denotes the element-wise multiplication of the attention map with the output of the feature encoding. We will include a figure depicting the details of the attention module employed in this work.\n\nWe agree that adding an additional dataset would strengthen the results of the proposed model. Nevertheless, we are limited by the availability of public datasets. We would be more than happy to consider additional datasets that the reviewer may suggest for this specific method. Having said that, we have uploaded (link below) a set of images that address these concerns. In particular, these images show how the attention maps are successfully generated under different situations, such as different shapes, multiple targets or size variations, demonstrating that the method is robust to heterogeneity in the target.\nLink: https://github.com/anonymous3578/paper270/blob/master/README.md#attention-maps\n\nQ1. Perhaps a weakly trained lesion detector with the multiple subspace learning may also give similar results. The authors should discuss this. \nA1. We are sorry, but we do not fully understand the suggestion of the reviewer, and would kindly ask whether s/he can elaborate on this suggestion."}, {"title": "Reply to reviewer's comments", "comment": "We thank the reviewer for the detailed comments on our paper and appreciate the positive feedback for improving it. The main concern is about clarifying the training and testing phases, which are addressed in detail below.\n \nQ1. \u201cClarity on training and testing in Figure 1\u201d\nA1. We will clarify in our manuscript that the layers of our global embedding and subspace embedding blocks share the same weights, and are trained end-to-end. The subspace embedding layer (i.e., the final layer) is obtained by dividing the embedding layer (a d-dimensional vector) into K subspaces (sub-vectors) of equal size (i.e., $d/K$). Note that \u2018the learned\u2019, \u2018the full\u2019 and \u2018the entire\u2019 embedding space all refer to the same d-dimensional vectors.\n\nIn the training phase, the first step maps the data to a lower dimension using a global embedding network (the network is initialized with a random map, which learns over the epochs a grouping of similar data). The data is subsequently clustered using K-means. Each cluster is then assigned to an individual subspace learner, whose corresponding images are used to train that subspace ($d/K$-dimensional). Here, the mapping associates an image ($m$-dimensional) with a $d/K$-dimensional subspace. Over the learning phase, the data is re-grouped every $T$ epochs using the entire embedding layer (concatenating all $K$ sub-vectors) to benefit from a full sampling of the data. This is repeated until learning converges. 
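For concreteness, this overall schedule can be summarized by the following minimal sketch, reusing the hypothetical names of the grouping sketch above (regroup_with_kmeans, loader_for_cluster, metric_loss, num_epochs and the optimizer are likewise illustrative, not our exact implementation):\n\n```python\nd, T = 128, 10  # embedding size and re-grouping period (illustrative values)\n\nfor epoch in range(num_epochs):\n    if epoch % T == 0:\n        cluster_ids = regroup_with_kmeans()  # re-group as in the sketch above\n\n    # Each learner k updates only its own d/K-dimensional slice of the shared\n    # embedding, using only the images of its own cluster.\n    for k in range(K):\n        for x, y in loader_for_cluster(k, cluster_ids):\n            z = embed(x)[:, k * d // K : (k + 1) * d // K]\n            loss = metric_loss(z, y)  # e.g., a margin loss on the sub-vector\n            optimizer.zero_grad()\n            loss.backward()\n            optimizer.step()\n```\n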
Finally, we fine-tune the entire embedding (d-dimensional) with all the data in order to stabilize the embedding space. In the testing phase, the images are directly mapped using the entire embedding layer (i.e., an $m$- to $d$-dimensional mapping).\n\nQ2. Clarity regarding the subspace learner k.\nA2. Yes, the full embedding space is a concatenation of $K$ subspaces, each of size $d/K$. Each subspace learner is trained individually using the images associated with its cluster. For example, the $k^{th}$ subspace learns indices $k \cdot d/K$ to $(k+1) \cdot d/K - 1$. The loss for the $k^{th}$ learner is calculated using these indices (sub-vector). Essentially, each learner learns a part of the full embedding space.\n\nQ3. Clarity on re-clustering, testing and the visualization dimension of t-SNE.\nA3. The re-clustering and testing steps are explained in answer A1 above. In our testing, the images are mapped using the d-dimensional embedding space. The t-SNE visualization in figure 2 is based on the d-dimensional embedding for both our method and the baseline methods.\n"}], "comment_replyto": ["RW16IXEo813", "pt5itRHD2", "InBpFcpQF", "cx6Da7GB7U", "yJv27ciEEF", "OuFwvQ-rEh"], "comment_url": ["https://openreview.net/forum?id=kGyy-v8QAO&noteId=KZv2xtO9M9T", "https://openreview.net/forum?id=kGyy-v8QAO&noteId=Fg0N3UMnczV", "https://openreview.net/forum?id=kGyy-v8QAO&noteId=I4UWdB25d-n", "https://openreview.net/forum?id=kGyy-v8QAO&noteId=NI1acWWEsFZ", "https://openreview.net/forum?id=kGyy-v8QAO&noteId=ipXw4i1AfFk", "https://openreview.net/forum?id=kGyy-v8QAO&noteId=RW16IXEo813"], "meta_review_cdate": 1586294559657, "meta_review_tcdate": 1586294559657, "meta_review_tmdate": 1586294559657, "meta_review_ddate ": null, "meta_review_title": "MetaReview of Paper270 by AreaChair1", "meta_review_metareview": "Three groups of selected quotes from reviewers that were not fully addressed by the rebuttal and are sufficient to justify a reject:\n\n1) the only difference of the proposed method from the CVPR2019 reference is adding attention modules...\nthe presented work is incremental with limited novelty. \n\n2) The attention model is not described well and is not motivated properly for this problem\n\n3) The results are shown only for one dataset... The empirical results are not convincing enough...\nthis dataset is not the best suited for demonstrating this idea", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2020/Conference/Program_Chairs", "MIDL.io/2020/Conference/Paper270/Area_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=kGyy-v8QAO&noteId=SCxz5PvugHn"], "decision": "accept"}
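For reference, the attention module described in the authors' replies above (three 3x3 convolutions with {128, 32, 1} filters, ReLU activations after the first two, a Sigmoid after the last, and an element-wise product with the feature encoding) could be sketched as follows. This is a minimal PyTorch sketch; AttentionModule, in_channels and the padding choice are assumptions rather than the authors' released code:

```python
import torch
import torch.nn as nn

class AttentionModule(nn.Module):
    """Three 3x3 convolutions with {128, 32, 1} filters; a ReLU follows the
    first two and a Sigmoid follows the last, yielding a one-channel map
    A(S(x)) that is multiplied element-wise with the feature encoding S(x)."""

    def __init__(self, in_channels: int):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Conv2d(in_channels, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, s_x: torch.Tensor) -> torch.Tensor:
        a = self.attention(s_x)  # attention map A(S(x)), values in (0, 1)
        return a * s_x           # element-wise product A(S(x)) o S(x)
```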