{"forum": "B1GIQhCcYm", "submission_url": "https://openreview.net/forum?id=B1GIQhCcYm", "submission_content": {"title": "Unsupervised one-to-many image translation", "abstract": "We perform completely unsupervised one-sided image to image translation between a source domain $X$ and a target domain $Y$ such that we preserve relevant underlying shared semantics (e.g., class, size, shape, etc). \nIn particular, we are interested in a more difficult case than those typically addressed in the literature, where the source and target are ``far\" enough that reconstruction-style or pixel-wise approaches fail.\nWe argue that transferring (i.e., \\emph{translating}) said relevant information should involve both discarding source domain-specific information while incorporate target domain-specific information, the latter of which we model with a noisy prior distribution. \nIn order to avoid the degenerate case where the generated samples are only explained by the prior distribution, we propose to minimize an estimate of the mutual information between the generated sample and the sample from the prior distribution. We discover that the architectural choices are an important factor to consider in order to preserve the shared semantic between $X$ and $Y$. \nWe show state of the art results on the MNIST to SVHN task for unsupervised image to image translation.", "keywords": ["Image-to-image", "Translation", "Unsupervised", "Generation", "Adversarial", "Learning"], "authorids": ["samuel.lavoie-marchildon@umontreal.ca", "sebastien.lachapelle@umontreal.ca", "mikbinkowski@gmail.com", "aaron.courville@gmail.com", "yoshua.umontreal@gmail.com", "devon.hjelm@microsoft.com"], "authors": ["Samuel Lavoie-Marchildon", "Sebastien Lachapelle", "Miko\u0142aj Bi\u0144kowski", "Aaron Courville", "Yoshua Bengio", "R Devon Hjelm"], "TL;DR": "We train an image to image translation network that take as input the source image and a sample from a prior distribution to generate a sample from the target distribution", "pdf": "/pdf/91fe2252aae23d28681a05d8039bf213989a7fd1.pdf", "paperhash": "lavoiemarchildon|unsupervised_onetomany_image_translation", "_bibtex": "@misc{\nlavoie-marchildon2019unsupervised,\ntitle={Unsupervised one-to-many image translation},\nauthor={Samuel Lavoie-Marchildon and Sebastien Lachapelle and Miko\u0142aj Bi\u0144kowski and Aaron Courville and Yoshua Bengio and R Devon Hjelm},\nyear={2019},\nurl={https://openreview.net/forum?id=B1GIQhCcYm},\n}"}, "submission_cdate": 1538087966304, "submission_tcdate": 1538087966304, "submission_tmdate": 1545355412166, "submission_ddate": null, "review_id": ["HkewPqKK2X", "r1xDSgvJ3Q", "H1gp9H-xhX"], "review_url": ["https://openreview.net/forum?id=B1GIQhCcYm¬eId=HkewPqKK2X", "https://openreview.net/forum?id=B1GIQhCcYm¬eId=r1xDSgvJ3Q", "https://openreview.net/forum?id=B1GIQhCcYm¬eId=H1gp9H-xhX"], "review_cdate": [1541147231041, 1540481086564, 1540523413319], "review_tcdate": [1541147231041, 1540481086564, 1540523413319], "review_tmdate": [1543329455614, 1541533196690, 1541533196488], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1GIQhCcYm", "B1GIQhCcYm", "B1GIQhCcYm"], "review_content": [{"title": "Good formulation, but not novel and short comparison", "review": "==== After rebuttal === \nI thank the authors for responses. 
I carefully read the response, but it is difficult to find a reason to increase the score, so I keep my score.\n====================\n\nUnsupervised image-to-image (I2I) translation is an important problem due to its many applications, and it remains challenging for diverse image data and for data with a large domain gap. This paper employs a neural mutual information estimator (MINE) to handle I2I translation between two domains separated by a large gap. However, the paper has several issues.\n1. Pros and Cons\n (+) Mathematical definition of I2I translation\n (+) Application of mutual information for preserving content\n (-) Lack of comparison with recent I2I models\n (-) Lack of experimental results and ablation studies\n (-) Unclear novelty\n2. Major comments\n - The novelty of this paper is not clear. Excluding the mathematical definition, it seems that the proposed TI simply combines a DCGAN with MINE-based statistics networks. Providing the detailed architecture and the final objective functions would help clarify the novelty.\n - Recent works on unsupervised I2I translation are omitted, including UNIT [1], MUNIT [2], and DRIT [3]. The authors also need to clarify the main differences between TI-GAN and the compared models.\n - It is not clear how the mathematical definition of domain transfer relates to one-to-many translation across a large domain gap.\n - It is not clear how the mutual information (MINE) is used for learning. There is no explicit definition of a loss function that includes the MINE term.\n - The comparison with other state-of-the-art models such as UNIT, MUNIT, DRIT, and AugCycleGAN is missing; the authors compare their results with CycleGAN only.\n - The experiments are insufficient to support the authors\u2019 claims. There is no quantitative metric or qualitative result for the edges-to-shoes task.\n - The paper is difficult to read due to inconsistent usage of terms (e.g., the (c) panels of Figures 3 and 4).\n - For better understanding, the patterns of the MINE loss and the adversarial loss should be compared.\n - Experiments on more datasets (e.g., animals, seasons, faces, or USPS) are needed.\n - What is the main difference in the results between the DCGAN-based and UNet-based models?\n\n\nMinor\n - The circle-times symbol appears to denote a product of distributions, but it should be defined before being used.\n - The reference for CycleGAN is incorrectly cited.\n - There are some typos in the paper.\n - page 1: dependent \u2192 depend\n - page 3: by separate \u2192 by separating\n - page 6: S a I \u2192 S and I\n\n\n1. Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised Image-to-Image Translation Networks. CoRR, abs/1703.00848, 2017.\n2. Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. Multimodal Unsupervised Image-to-Image Translation. CoRR, abs/1804.04732.\n3. 
Hsin-Ying Lee, Hung-Yu Tseng, Jia-Bin Huang, Maneesh Kumar Singh, and Ming-Hsuan Yang. Diverse Image-to-Image Translation via Disentangled Representations. ECCV 2018.\n", "rating": "3: Clear rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Nice problem formulation but limited model novelty and comparisons.", "review": "This paper formalizes the problem of unsupervised translation and proposes an augmented GAN framework that uses mutual information to avoid the degenerate case.\n\nPros:\n* The formulation of the unsupervised translation problem is insightful.\n* The paper is well written and easy to follow.\n\nCons:\n* The contribution of this paper to the GAN model is to add a mutual information penalty (MINE, Belghazi et al., 2018) to the GAN loss, which seems incremental. I also wonder whether perceptual losses or the latent-code regression constraints used in previous works [1,2] could achieve the same goal.\n* A comparison to \u201cAugmented CycleGAN: Learning Many-to-Many Mappings from Unpaired Data\u201d should be done, since it is a closely related work on unsupervised many-to-many image translation.\n* The visualization results of TI-GAN, TI-GAN+minI, and CycleGAN should be listed with the same source input for a fair and easy comparison. For example, the failure case of Figure 8 mentioned in Section 5.2 only appears in Figure 5 (1), not in Figure 5 (2).\n* Minor issues: 1) What is the full name of \u201cTI-GAN\u201d? 2) Figure 6 is not mentioned in the experiments. 3) What does \u201cFigure A\u201d mean in Section 4.2?\n\n[1] Multimodal Unsupervised Image-to-Image Translation, ECCV\u201918\n[2] Diverse Image-to-Image Translation via Disentangled Representations, ECCV\u201918\n\nOverall, this paper proposes a nice formulation of the unsupervised translation problem, but the contribution to the GAN model seems incremental and the comparisons to other methods are not sufficient. My initial rating is rejection.\n", "rating": "4: Ok but not good enough - rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Good problem formulation, not a novel method.", "review": "This paper formulates the problem of unsupervised one-to-many image translation and addresses it by minimizing mutual information. A principled formulation of this problem is quite interesting. However, the novelty of this paper is limited. The proposed method is a simple extension of InfoGAN, applied to image-to-image translation and replacing the mutual information part with MINE.\n\nThe experiments, which only include edges-to-shoes and MNIST-to-SVHN, are also not comprehensive or convincing.
This paper also lacks a discussion of several important related references on one-to-many image translation.\n\nXOGAN: One-to-Many Unsupervised Image-to-Image Translation\nToward Multimodal Image-to-Image Translation\n\n", "rating": "4: Ok but not good enough - rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": ["rkxgF6RRC7", "SJeMJqfPR7", "ryeutYzPRQ", "SyeAA_Mw07", "r1lSJuMvR7"], "comment_cdate": [1543593336280, 1543084506058, 1543084416376, 1543084245618, 1543083996819], "comment_tcdate": [1543593336280, 1543084506058, 1543084416376, 1543084245618, 1543083996819], "comment_tmdate": [1543593336280, 1543084506058, 1543084416376, 1543084245618, 1543083996819], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2019/Conference/Paper1363/AnonReviewer2", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper1363/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper1363/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper1363/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper1363/Authors", "ICLR.cc/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Rating unchanged", "comment": "Thanks for your rebuttal. Some issues are fixed, but comparisons with other approaches (e.g., perceptual losses, a latent-code regression constraint, and Augmented CycleGAN) are not addressed. I still think the novelty and comparisons are limited, so I keep my rating."}, {"title": "Thank you for the detailed review and for pointing out important flaws.", "comment": "Thank you, AnonReviewer1, for the review and for raising some important points. The lack of comparison with existing models is addressed in a general comment.\n\n> It is not clear how the mutual information (MINE) is used for learning. There is no explicit definition of a loss function that includes the MINE term.\nThe total generator loss combines the GAN loss with the MI penalty; the latter is estimated between the noise prior and the generated sample. MINE is optimized concurrently with the GAN discriminator. We agree that explicitly stating the TI-GAN objective is important; we will add it to the next revision of our paper.\n\n> The paper is difficult to read due to inconsistent usage of terms (e.g., the (c) panels of Figures 3 and 4).\nWe will fix the inconsistency in Figures 3 and 4 that you pointed out.\n\n> For better understanding, the patterns of the MINE loss and the adversarial loss should be compared.\nThis is a good point. We will add a more thorough analysis of both MINE losses, including ablation studies and plots that evaluate the loss of each MINE estimator.\n\n> What is the main difference in the results between the DCGAN-based and UNet-based models?\nUNet-based models achieved relatively good sample quality and disentanglement between the semantics and the SVHN-specific features. However, compared to DCGAN, the transfer was often incorrect. We will include qualitative results comparing the two architectures in the next version of our paper.\n\n> Minor comments\nWe will incorporate all of your recommendations in the next version of our paper.\n"}, {"title": "Thank you for a detailed review.", "comment": "Thank you, AnonReviewer2, for the review. 
We refer to the lack of comparison in a general comment.\n\n> The visualization results of TI-GAN, TI-GAN+minI, and CycleGAN should be listed with the same source input for a fair and easy comparison. For example, the failure case of Figure 8 mentioned in Section 5.2 only appears in Figure 5 (1), not in Figure 5 (2).\n\nGood point; we will add that. However, we think that the results would reflect the same conclusion, namely that TI-GAN with a U-Net architecture fails without the MI penalty.\n\n> What is the full name of \u201cTI-GAN\u201d?\n\nTwo-Input GAN. We will make this explicit in the paper.\n\n> Figure 6 is not mentioned in the experiments.\n\nIt should be mentioned, but due to a typo in the LaTeX we referenced Figure 8 instead of Figure 6. This will be fixed.\n\n> What does \u201cFigure A\u201d mean in Section 4.2?\n\nWe meant to reference the figures in Appendix A. We will make this explicit by referencing the figures directly.\n"}, {"title": "Reply to Reviewer 3. Our motivation is different from InfoGAN.", "comment": "Thank you, AnonReviewer3, for the review. The lack of comparison is addressed in a general comment.\n\n> The proposed method is a simple extension of InfoGAN, applied to image-to-image translation and replacing the mutual information part with MINE.\nThe purpose of using mutual information in our paper is different from the one presented in InfoGAN. In our paper, we use mutual information as a means to penalize the model for relying solely on the information coming from the noise distribution and disregarding the source.\nAll I2I models that aim to produce multimodal (many-to-many) transfer use some sort of prior noise to account for domain-specific features of the target domain (i.e., features not present in the source domain). This, however, may lead to a failure mode where the learnt transfer function is agnostic to the source, as shown in Figure 7 (1).\n"}, {"title": "General comment", "comment": "We would like to thank all reviewers for their effort and for pointing out important flaws of the paper. We agree that more comparisons with recent I2I translation techniques are needed. In particular, a more in-depth study of previous work would make it clearer that certain I2I tasks, especially ones that involve larger geometric changes (such as MNIST to SVHN), are not yet solved, and that the proposed model addresses certain problems involved in such tasks in a novel way.\n"}], "comment_replyto": ["ryeutYzPRQ", "HkewPqKK2X", "r1xDSgvJ3Q", "H1gp9H-xhX", "B1GIQhCcYm"], "comment_url": ["https://openreview.net/forum?id=B1GIQhCcYm&noteId=rkxgF6RRC7", "https://openreview.net/forum?id=B1GIQhCcYm&noteId=SJeMJqfPR7", "https://openreview.net/forum?id=B1GIQhCcYm&noteId=ryeutYzPRQ", "https://openreview.net/forum?id=B1GIQhCcYm&noteId=SyeAA_Mw07", "https://openreview.net/forum?id=B1GIQhCcYm&noteId=r1lSJuMvR7"], "meta_review_cdate": 1544633277984, "meta_review_tcdate": 1544633277984, "meta_review_tmdate": 1545354502282, "meta_review_ddate ": null, "meta_review_title": "Lack of novelty", "meta_review_metareview": "The paper formulates the problem of unsupervised one-to-many image translation and addresses it by minimizing mutual information.\n\nThe reviewers and the AC note that the novelty and comparisons of this paper fall critically short of the high standard of ICLR.
\n\nThe AC decided that the paper needs more work before it can be published.", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2019/Conference/Paper1363/Area_Chair1"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1GIQhCcYm&noteId=H1xLpi20J4"], "decision": "Reject"}
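Editor's note: the authors' reply to Reviewer 1 in the record above describes the TI-GAN objective only in words — the total generator loss combines the GAN loss with a mutual-information penalty estimated by MINE between the noise prior and the generated sample, and the MINE statistics network is trained concurrently with the GAN discriminator. Below is a minimal, hypothetical PyTorch-style sketch of one training step under that description. The modules G, D, and T, the weight lambda_mi, and the optimizers are illustrative placeholders, not the architecture, objective, or hyperparameters actually used in the paper.

```python
# Hypothetical sketch of a TI-GAN-style training step: GAN loss plus a MINE-based
# penalty on the estimated mutual information I(z; G(x, z)), following the wording
# of the authors' response. G, D, T, lambda_mi and the optimizers are placeholders.
import math
import torch
import torch.nn.functional as F

def mine_lower_bound(T, z, y_gen):
    """Donsker-Varadhan lower bound on I(z; y_gen) with statistics network T."""
    joint = T(z, y_gen).view(-1)              # paired samples from the joint
    z_perm = z[torch.randperm(z.size(0))]     # shuffled z approximates the product of marginals
    marginal = T(z_perm, y_gen).view(-1)
    return joint.mean() - (torch.logsumexp(marginal, dim=0) - math.log(marginal.numel()))

def training_step(G, D, T, x_src, y_tgt, z, opt_g, opt_d, opt_t, lambda_mi=0.1):
    # 1) Update the discriminator and the MINE statistics network concurrently.
    y_gen = G(x_src, z).detach()
    d_real, d_fake = D(y_tgt), D(y_gen)
    d_loss = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    mi_est = mine_lower_bound(T, z, y_gen)
    opt_t.zero_grad()
    (-mi_est).backward()                      # T maximizes the MI lower bound
    opt_t.step()

    # 2) Update the generator: fool D while minimizing the estimated MI between the
    #    noise z and the output, so the output is not explained by the prior alone.
    y_gen = G(x_src, z)
    d_out = D(y_gen)
    g_loss = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out)) \
           + lambda_mi * mine_lower_bound(T, z, y_gen)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item(), mi_est.item()
```

In this sketch the statistics network T plays an adversarial role analogous to the discriminator: it is trained to tighten the MI lower bound while the generator is trained to push that estimate down, which is the mechanism the authors describe for avoiding the degenerate solution in which the generator ignores the source image.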