AMSR / conferences_raw / iclr19 / ICLR.cc_2019_Conference_B1exrnCcF7.json
{"forum": "B1exrnCcF7", "submission_url": "https://openreview.net/forum?id=B1exrnCcF7", "submission_content": {"title": "Disjoint Mapping Network for Cross-modal Matching of Voices and Faces", "abstract": "We propose a novel framework, called Disjoint Mapping Network (DIMNet), for cross-modal biometric matching, in particular of voices and faces. Different from the existing methods, DIMNet does not explicitly learn the joint relationship between the modalities. Instead, DIMNet learns a shared representation for different modalities by mapping them individually to their common covariates. These shared representations can then be used to find the correspondences between the modalities. We show empirically that DIMNet is able to achieve better performance than the current state-of-the-art methods, with the additional benefits of being conceptually simpler and less data-intensive.", "keywords": ["cross-modal matching", "voices", "faces"], "authorids": ["yandongw@andrew.cmu.edu", "mahmoudi@andrew.cmu.edu", "wyliu@gatech.edu", "bhiksha@cs.cmu.edu", "rsingh@cs.cmu.edu"], "authors": ["Yandong Wen", "Mahmoud Al Ismail", "Weiyang Liu", "Bhiksha Raj", "Rita Singh"], "pdf": "/pdf/709cb2d7c2701b45fb771f6c972463b7441ecfc8.pdf", "paperhash": "wen|disjoint_mapping_network_for_crossmodal_matching_of_voices_and_faces", "_bibtex": "@inproceedings{\nwen2018disjoint,\ntitle={Disjoint Mapping Network for Cross-modal Matching of Voices and Faces},\nauthor={Yandong Wen and Mahmoud Al Ismail and Weiyang Liu and Bhiksha Raj and Rita Singh},\nbooktitle={International Conference on Learning Representations},\nyear={2019},\nurl={https://openreview.net/forum?id=B1exrnCcF7},\n}"}, "submission_cdate": 1538087992037, "submission_tcdate": 1538087992037, "submission_tmdate": 1556220681519, "submission_ddate": null, "review_id": ["r1e5d52hhQ", "Hkxxm_Qsh7", "HkxvRZesnQ"], "review_url": ["https://openreview.net/forum?id=B1exrnCcF7&noteId=r1e5d52hhQ", "https://openreview.net/forum?id=B1exrnCcF7&noteId=Hkxxm_Qsh7", "https://openreview.net/forum?id=B1exrnCcF7&noteId=HkxvRZesnQ"], "review_cdate": [1541356145597, 1541253144146, 1541239246566], "review_tcdate": [1541356145597, 1541253144146, 1541239246566], "review_tmdate": [1541533076626, 1541533076426, 1541533076187], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1exrnCcF7", "B1exrnCcF7", "B1exrnCcF7"], "review_content": [{"title": "Covariates factors are learned from voice and image data using CNNs. A logistic classifier is trained for cross-modal matching from covariates. ", "review": "Authors aim to reveal relevant dependencies between voice and image data (under a cross-modal matching framework) through common covariates (gender, ID, nationality). Each covariate is learned using a CNN from each provided domain (speak recordings and face images), then, a classifier is determined from a shared representation, which includes the CNN outputs from voice-based and image-based covariate estimations. The idea is interesting, and the paper ideas are clear to follow.\n\nPros:\n- New insights to support cross-modality matching from covariates.\n- Competitive results against state-of-the-art.\n-Convincing experiments.\n\nCons:\n-Fixing the output dimension to d (for both voice and image-based CNN outputs) could lead to unstable results. 
Indeed, the comparison of voice and face-based covariate estimates are not entirely fair due to the intrinsic dimensionality can vary for each domain. Alternatives as canonical correlation analysis can be coupled to joint properly both domains.\n- Table 4 - column ID results are not convincing (maybe are not clear for me).", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Review of Disjoint Mapping Network for Cross-modal Matching of Voices and Faces", "review": "# Summary\n\nThe article proposes a deep learning-based approach aimed at matching face images to voice recordings belonging to the same person. \n\nTo this end, the authors use independently parametrized neural networks to map face images and audio recordings -- represented as spectrograms -- to embeddings of fixed and equal dimensionality. Key to the proposed approach, unlike related prior work, these modules are not directly trained on some particular form of the cross-modal matching task. Instead, the resulting embeddings are fed to a modality-agnostic, multiclass logistic regression classifier that aims to predict simple covariates such as gender, nationality or identity. The whole system is trained jointly to maximise the performance of these classifiers. Given that (face image, voice recording) pairs belonging to the same person must share equal for these covariates, the neural networks embedding face images and audio recordings are thus indirectly encouraged to map face images and voice recordings belonging to the same person to similar embeddings.\n\nThe article concludes with an exhaustive set of experiments using the VGGFace and VoxCeleb datasets that demonstrates improvements over prior work on the same set of tasks.\n\n# Originality and significance\n\nThe article follows-up on recent work [1, 2], building on their original application, experimental setup and model architecture. The key innovation of the article, compared to the aforementioned papers, lies on the idea of learning face/voice embeddings to maximise their ability to predict covariates, rather than by explicitly trying to optimise an objective related to cross-modal matching. While the fact that these covariates are strongly associated to face images and audio recordings had already been discussed in [1, 2], the idea of actually using them to drive the learning process is novel in this particular task.\n\nWhile the article does not present substantial, general-purpose methodological innovations in machine learning, I believe it constitutes a solid application of existing techniques. Empirically, the proposed covariate-driven architecture is demonstrated to lead to better performance in the (VGGFace, VoxCeleb) dataset in a comprehensive set of experiments. As a result, I believe the article might be of interest to practitioners interested in solving related cross-modal matching tasks.\n\n# Clarity\n\nThe descriptions of the approach, related work and the different experiments carried out are written clearly and precisely. Overall, the paper is rather easy to read and is presented using a logical, easy-to-follow structure.\n\nIn my opinion, perhaps the only exception to that claim lies in Section 3.4. If possible, I believe the Seen-Heard and Unseen-Unheard scenarios should be introduced in order to make the article self-contained. \n\n# Quality\n\nThe experimental section is rather exhaustive. 
Despite essentially consisting of a single dataset, it builds on [1, 2] and presents a solid study that rigorously accounts for many factors, such as potential confounding due to gender and/or nationality driving prediction performance in the test set. \n\nMultiple variations of the cross-modal matching task are studied. While, in absolute terms, no approach seems to have satisfactory performance yet, the experimental results seem to indicate that the proposed approach outperforms prior work.\n\nGiven that the authors claimed to have run 5 repetitions of the experiment, I believe reporting some form of uncertainty estimates around the reported performance values would strengthen the results.\n\nHowever, I believe that the success of the experimental results, more precisely, of the variants trained to predict the \"covariate\" identity, call into question the very premise of the article. Unlike gender or nationality, I believe that identity is not a \"covariate\" per se. In fact, as argued in Section 3.1, the prediction task for this covariate is not well-defined, as the set of identities in the training, validation and test sets are disjoint. In my opinion, this calls into question the hypothesis that what drives the improved performance is the fact that these models are trained to predict the covariates. Rather, I wonder if the advantages are instead a \"fortunate\" byproduct of the more efficient usage of the data during the training process, thanks to not requiring (face image, audio recording) pairs as input.\n\n# Typos\n\nSection 2.4\n1) \"... image.mGiven ...\"\n2) Cosine similarity written using absolute value |f| rather than L2-norm ||f||_{2}\n3) \"Here we are give a probe input ...\"\n\n# References\n\n[1] Nagrani, Arsha, Samuel Albanie, and Andrew Zisserman. \"Learnable PINs: Cross-Modal Embeddings for Person Identity.\" arXiv preprint arXiv:1805.00833 (2018).\n[2] Nagrani, Arsha, Samuel Albanie, and Andrew Zisserman. \"Seeing voices and hearing faces: Cross-modal biometric matching.\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.", "rating": "6: Marginally above acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "Networks that predict covariates of multimodal inputs like identity and gender produce better representations for cross-modal matching and retrieval tasks than directly predicting cross-modal matches. Paper and well written and experiments are thorough.", "review": "This paper aims at matching people's voices to the images of their faces. It describes a method to train shared embeddings of voices and face images. The speech and image features go through separate neural networks until a shared embedding layer. Then a classification network is built on top of the embeddings from both networks. The classification network predicts various combinations of covariates of faces and voices: gender, nationality, and identity. The input to the classification network is then used as a shared representation for performing retrieval and matching tasks.\n\nCompared with similar work from Nagrani et al (2018) who generate paired inputs of voices and faces and train a network to classify if the pair is matched or not, the proposed method doesn't require paired inputs. It does, however, require inputs that are labeled with the same covariates across modalities. 
My feeling is that paired positive examples are easier to obtain (e.g., from unlabeled video) than inputs labeled with these covariates, although paired negative examples require labeling and so may be as difficult to obtain.\n\nSeveral different evaluations are performed, comparing networks that were trained to predict all subsets of identity, gender, and nationality. These include identifying a matching face in a set of faces (1,2 or N faces) for a given voice, or vice versa. Results show that the network that predicts identity+gender tends to work best under a variety of careful examinations of various stratifications of the data. These stratifications also show that while gender is useful overall, it is not when the gender of imposters is the same as that of the target individual. The results also show that even when evaluating the voices and faces not shown in the training data, the model can achieve 83.2% AUC on unseen/unheard individuals, which outperforms the state-of-the-art method from Nagrani et al (2018).\n\nAn interesting avenue of future work would be using the prediction of these covariates to initialize a network and then refine it using some sort of ranking loss like the triplet loss, contrastive loss, etc.\n\n\nWriting:\n* Overall, ciations are all given in textual form Nagrani et al (2018) (in latex this is \\citet{} or \\cite{}), when many times parenthetical citations (Nagrani et al, 2018) (in latex this is \\citep{}) would be more appropriate.\n* The image of the voice waveform in Figures 1 and 2 should be replaced by log Mel-spectrograms in order to illustrate the network's input.\n* \"state or art\" instead of \"state-of-the-art\" on page 3. \n* In subsection 2.4: \"mGiven\" is written instead of \"Given\". \n* On Page 6 Section 3.1 \"1:2 matching\" paragraph. \"Nagrani et al.\" is written twice. * * Page 6 mentions that there is a row labelled \"SVHF-Net\" in table 2, but there is no such row is this table. \n* Page 7 line 1, \u201cG,N\u201d should be \"G, N\".\n", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": ["r1gPRr2jT7", "r1lCSghsam", "ByxLt0iipQ"], "comment_cdate": [1542337998986, 1542336582248, 1542336126394], "comment_tcdate": [1542337998986, 1542336582248, 1542336126394], "comment_tmdate": [1542337998986, 1542336582248, 1542336240337], "comment_readers": [["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2019/Conference/Paper1510/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper1510/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper1510/Authors", "ICLR.cc/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Rebuttal for reviewer 2", "comment": "We sincerely appreciate the review for the recognition of our novelty and many valuable suggestions.\n\nOur main contribution mainly lies in proposing a cross modal matching framework called DIMNet, which learns a shared representation for different modalities by mapping them individually to their common covariates. 
Our basic intuition is that if the learned embeddings of voices and faces can be correctly classified by a unified (linear) classifier, the embeddings of the same class should be in a common decision region and close to each other.\nCompared to the existing work [3,4], the supervision could be any combination of covariates, which enables us to isolate and analyze the effect of the individual covariate to the learned embeddings. Moreover, DIMNet makes better use of the multiple covariates in the course of training. \n\nIn order to perform fair comparisons, we exactly follow the experimental setup in pioneering work [3,4], and achieve significant improvements compared to these strong baselines [3,4].\n\nQ1. In my opinion, perhaps the only exception ... in order to make the article self-contained.\nA1. We thank the reviewer for this suggestion. We do mention the two scenarios in the paper, but the reviewer is right, we do not explicitly introduce them. We now do so in the updated paper.\n\nIn summary, the audio data we used in Section 3.4 is the same as those in other experiment sections, while the visual data is extracted from the video frames in VoxCeleb dataset at 25/6 fps. For fair comparison, we follow the train/val/test split strategy from [4] and evaluate our DIMNet models under Seen-Heard (closed-set) and Unseen-Unheard (open-set)scenarios. More details can be found in the updated paper.\n\nAction taken: Provided more details about the datasets, and experimental settings in Section 3.4 and appendix A.\n\n\nQ2. Given that the authors claimed to have run 5 repetitions ... strengthen the results.\nA2. We thank the reviewer for this suggestion. We have now computed the standard deviations of the results and added them to each table.\n\nAction taken: Added standard deviations of the results to each table.\n\nQ3. However, I believe that the success of the experimental results, ..., validation and test sets are disjoint.\nA3. Our definition of covariate, as stated in the paper, are the ID-sensitive factors that can simultaneously affect voice and face, e.g. nationality, gender, identity, etc. We do not require the value these factors take to be the same between the training and test set. Thus, from the perspective of our model, we only require that faces and voices in the test set co-vary with ID; we do not require that ID to be present in training. What we are learning is the nature of the covariation with the variable in general, not merely the covariation with the specific values the variable takes in the training set.\n\nTo give another example, if we were to consider age as a covariate (which we have not in the current set of experiments, since we do not desire age-sensitive matching), we would expect to learn how both voice and face embeddings vary with age. This then could be used to match voice and face embeddings in the test set even if the corresponding age were not observed in training.\n\nAction taken: Added the above discussions about covariates to introduction section.\n\nQ4. In my opinion, this calls into question the hypothesis ..., thanks to not requiring (face image, audio recording) pairs as input.\nA4. More efficient usage of the data is indeed one of the advantages of our DIMNet framework, as we state in both the introduction and the discussion. And this is achieved, by design, by exploiting (and explicitly modelling) the dependence between the modalities and covariates in a generalizable manner. 
The outcomes we observe in our experiments are entirely to be expected, from our hypothesis, and we believe that the rather detailed set of experiments (and the analyses in our appendix) show that the results are not merely fortuitous. As indicated by our experiments, DIMNet-I achieves 83.45% accuracy on 1:2 matching task since ID is undoubtedly the most informative covariate. Even using less informative covariates, DIMNet-G still achieves 72% matching accuracy.\n \nQ5. Typos\nA5. We thank the reviewer for the pointing out the typos. All the typos are fixed in the updated paper.\n\"... image.mGiven ...\" -> \"... image. Given ...\"\n|Fv||Ff| -> ||Fv||_2||Ff||_2\n\"Here we are give a probe input ...\" -> Here we are given a probe input \u2026\u201d\n\n[3] Nagrani, Arsha, et al. \"Seeing voices and hearing faces: Cross-modal biometric matching.\" IEEE CVPR 2018.\n[4] Nagrani, Arsha, et al. \"Learnable PINs: Cross-Modal Embeddings for Person Identity.\" arXiv preprint arXiv:1805.00833 (2018).\n[5] Chung, Joon Son, et al. \"Out of time: automated lip sync in the wild.\" ACCV, 2016."}, {"title": "Rebuttal for reviewer 3", "comment": "We thank the reviewer for the very positive and encouraging review. \n\nQ1. My feeling is that paired positive examples are easier to obtain (e.g., from unlabeled video) than inputs labeled with these covariates, although paired negative examples require labeling and so may be as difficult to obtain.\n\nA1. We agree with the reviewer. Compared to covariates, the pairwise label is usually easier to obtain. However, some challenges still exist for collecting the examples from video, making it a non-trivial problem. For example, the cases of reaction shots, flashbacks and dubbing in videos may result in noisy labels. Previous work [6] investigated the use of the paired data in self-supervised learning manner, where SyncNet [7] is adopted to obtain the speaking faces.\n\nFor our paper, we focus on proposing a DIMNet framework to learn embeddings for cross-modal matching with the given cross-modal data and their labeled covariates. How to collect data is perhaps beyond the scope of this paper but could be an interesting direction for our future work.\n\nQ2. Typos\nA2. We thank the reviewer for pointing out the typos. All the typos are fixed in the updated paper.\nCitations: we have carefully checked the citations and accordingly fixed them one by one .\nFigures: The waveforms have been replaced by log Mel-spectrograms.\n\u201cstate or art\u201d -> \u201cstate-of-the-art\u201d\n\u201cmGiven\u201d -> \u201cGiven\u201d\n\"Nagrani et al. Nagrani et al. (2018b)\" -> \u201cNagrani et al. (2018b)\u201d; typo in Table 2 is fixed\n\u201cG,N\u201d -> \"G, N\"\n\n[6] Nagrani, Arsha, Samuel Albanie, and Andrew Zisserman. \"Learnable PINs: Cross-Modal Embeddings for Person Identity.\" arXiv preprint arXiv:1805.00833 (2018).\n[7] Chung, Joon Son, and Andrew Zisserman. \"Out of time: automated lip sync in the wild.\" Asian Conference on Computer Vision. Springer, Cham, 2016."}, {"title": "Rebuttal for reviewer 1", "comment": "We thank the reviewer for the recognition of the novelty and the detailed experimental evaluation in our contribution.\n\nQ1. Fixing the output dimension to d (for both voice and image-based CNN outputs) could lead to unstable results. Indeed, the comparison of voice and face-based covariate estimates are not entirely fair due to the intrinsic dimensionality can vary for each domain. 
Alternatives as canonical correlation analysis can be coupled to joint properly both domains.\nA1. In order to compare embeddings from two modalities (domains), the dimensionality of the embeddings need to be the same. We agree with the reviewer that the intrinsic dimensionality of data in different modalities (domains) could vary. However, it does not contradict the fact that these data can be well represented by the identical-dimensioned embeddings through CNNs, and most importantly, the performance (in the following table) is very stable within a wide range of embedding dimension, showing that the accuracy is not sensitive to the embedding dimension. The idea of using the identical-dimensioned embeddings is also adopted by [1] and [2].\n\nThe accuracies of DIMNet-I with different embedding dimensions on 1:2 matching experiments\n-------------------------------------------------------------------------------\nDimension 32 64 128 256 512\n-------------------------------------------------------------------------------\nDIMNet-I 82.20 83.45 83.87 83.43 83.16\n-------------------------------------------------------------------------------\n\nAction taken: Added this experiment in appendix A with analysis.\n\nCanonical correlation analysis (CCA) is a good idea to investigate the correlation of data between different domains, and it could indeed be used to match different-dimensioned embeddings derived from the two modalities, and was indeed one of our ideas enroute to the development of DIMNet. The reasons we do not use it are the following: (a) The final projection in CCA is a linear transform that is easily subsumed within the network (in fact a linear projection may be viewed as a fully-connected layer with linear activations). (b) More importantly, the underlying idea of CCA is very different from DIMNet. Specifically, CCA requires one-to-one correspondence between the two modalities it considers, an assumption DIMNet explicitly tries to avoid. Specifically, in the case of static face images vs. voice samples, it is unclear that such correspondence is derivable. Given that we have multiple face images and multiple voice recordings for any person, all captured at different times, which pairs of voice recordings and face images would we group together? Any correspondence imposed would be artificial. On the other hand, DIMNet builds correspondences between voices (or faces) and their covariates, and does not expect direct correspondence between the two modalities -- in fact this is one of the key features of our model which differentiates it from prior work. The comparison could be more intuitively noted from Fig. 1 in our paper.\n\nQ2. Table 4 - column ID results are not convincing (maybe are not clear for me).\nA2. The ID column in Table 4 shows the mean average precision (mAP) of the retrieved ID, when one modality (e.g. face) is posed as the query and retrieval of corresponding recordings of the other modality (e.g. voice) must be performed. The evaluation dataset consists of 21,799 voices and 58,420 faces, both from 182 identities. Compared to gender (2 classes) and nationality (unbalanced 28 classes), it is a challenging problem to rank the gallery voices (faces) based on the probe face (voice) given these many identities (182 classes). Chance-level performance (i.e., random guess) is about 0.55% for voice->face and 0.58% for face->voice, while we achieved 1.07~4.25% for voice->face and 1.03%~4.17% for face->voice. 
It means that the DIMNet models do learn useful associations between voices and faces.\n\nAction taken: Added one row of chance level results to Table 4 with analysis.\n\n[1] Nagrani, Arsha, Samuel Albanie, and Andrew Zisserman. \"Learnable PINs: Cross-Modal Embeddings for Person Identity.\" arXiv preprint arXiv:1805.00833 (2018).\n[2] Kim, Changil, et al. \"On Learning Associations of Faces and Voices.\" arXiv preprint arXiv:1805.05553 (2018).\n"}], "comment_replyto": ["Hkxxm_Qsh7", "HkxvRZesnQ", "r1e5d52hhQ"], "comment_url": ["https://openreview.net/forum?id=B1exrnCcF7&noteId=r1gPRr2jT7", "https://openreview.net/forum?id=B1exrnCcF7&noteId=r1lCSghsam", "https://openreview.net/forum?id=B1exrnCcF7&noteId=ByxLt0iipQ"], "meta_review_cdate": 1545310113192, "meta_review_tcdate": 1545310113192, "meta_review_tmdate": 1545354474370, "meta_review_ddate ": null, "meta_review_title": "metareviw", "meta_review_metareview": "All reviewers agree that the proposed method interesting and well presented. The authors' rebuttal addressed all outstanding raised issues. Two reviewers recommend clear accept and the third recommends borderline accept. I agree with this recommendation and believe that the paper will be of interest to the audience attending ICLR. I recommend accepting this work for a poster presentation at ICLR.", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2019/Conference/Paper1510/Area_Chair1"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1exrnCcF7&noteId=HkxYjkGKxN"], "decision": "Accept (Poster)"}
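For readers who want a concrete picture of the DIMNet idea summarized in the abstract and rebuttals of this record (separate per-modality encoders supervised only by a shared covariate classifier, with no paired inputs, and cosine similarity used for matching at test time), the following is a minimal, hypothetical PyTorch sketch. It is not the authors' implementation: the encoder architectures, embedding dimension, class count, and function names are placeholder assumptions chosen only to illustrate the covariate-supervised training step and a 1:2 matching test.

```python
# Illustrative sketch only (not the paper's code): DIMNet-style training in which a
# voice encoder and a face encoder are supervised by one shared covariate classifier
# (identity here), so no (face image, voice recording) pairs are ever required.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM = 128   # shared embedding dimensionality d (assumed value)
NUM_IDS = 1000  # number of identity classes in the training split (assumed value)

class VoiceEncoder(nn.Module):
    """Maps a log Mel-spectrogram batch (B, 1, T, F) to d-dim embeddings."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, EMB_DIM)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class FaceEncoder(nn.Module):
    """Maps a face image batch (B, 3, H, W) to d-dim embeddings."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, EMB_DIM)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

voice_enc, face_enc = VoiceEncoder(), FaceEncoder()
id_classifier = nn.Linear(EMB_DIM, NUM_IDS)  # one classifier shared by both modalities

params = (list(voice_enc.parameters()) + list(face_enc.parameters())
          + list(id_classifier.parameters()))
opt = torch.optim.SGD(params, lr=0.1, momentum=0.9)

def train_step(spectrograms, spec_ids, images, img_ids):
    """Voice and face samples need not be paired; each only carries its covariate label."""
    emb = torch.cat([voice_enc(spectrograms), face_enc(images)], dim=0)
    labels = torch.cat([spec_ids, img_ids], dim=0)
    loss = F.cross_entropy(id_classifier(emb), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

@torch.no_grad()
def match_1_to_2(probe_spec, face_a, face_b):
    """1:2 matching: pick whichever face embedding is closer (cosine) to the probe voice."""
    v = F.normalize(voice_enc(probe_spec), dim=1)
    fa = F.normalize(face_enc(face_a), dim=1)
    fb = F.normalize(face_enc(face_b), dim=1)
    # Returns 0 where face_a is the predicted match, 1 where face_b is.
    return ((v * fb).sum(1) > (v * fa).sum(1)).long()
```

Note that `train_step` never consumes cross-modal pairs: each batch item only needs a covariate label, which is the data-efficiency point the rebuttals emphasize; additional covariate heads (gender, nationality) could be attached to the same embeddings in the same way.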