{"forum": "B1e8CsRctX", "submission_url": "https://openreview.net/forum?id=B1e8CsRctX", "submission_content": {"title": "Generative Ensembles for Robust Anomaly Detection", "abstract": "Deep generative models are capable of learning probability distributions over large, high-dimensional datasets such as images, video and natural language. Generative models trained on samples from p(x) ought to assign low likelihoods to out-of-distribution (OoD) samples from q(x), making them suitable for anomaly detection applications. We show that in practice, likelihood models are themselves susceptible to OoD errors, and even assign large likelihoods to images from other natural datasets. To mitigate these issues, we propose Generative Ensembles, a model-independent technique for OoD detection that combines density-based anomaly detection with uncertainty estimation. Our method outperforms ODIN and VIB baselines on image datasets, and achieves comparable performance to a classification model on the Kaggle Credit Fraud dataset.", "keywords": ["Anomaly Detection", "Uncertainty", "Out-of-Distribution", "Generative Models"], "authorids": ["hyunsunchoi@kaist.ac.kr", "ejang@google.com"], "authors": ["Hyunsun Choi", "Eric Jang"], "TL;DR": "We use generative models to perform out-of-distribution detection, and improve their robustness with uncertainty estimation.", "pdf": "/pdf/4d5bb7790da80a5224bb9b17051a26d51e6bc3ed.pdf", "paperhash": "choi|generative_ensembles_for_robust_anomaly_detection", "_bibtex": "@misc{\nchoi2019generative,\ntitle={Generative Ensembles for Robust Anomaly Detection},\nauthor={Hyunsun Choi and Eric Jang},\nyear={2019},\nurl={https://openreview.net/forum?id=B1e8CsRctX},\n}"}, "submission_cdate": 1538087886184, "submission_tcdate": 1538087886184, "submission_tmdate": 1545355396062, "submission_ddate": null, "review_id": ["SJxjubiqhm", "ryl3dTZYhX", "BJxgXKZdhQ"], "review_url": ["https://openreview.net/forum?id=B1e8CsRctX¬eId=SJxjubiqhm", "https://openreview.net/forum?id=B1e8CsRctX¬eId=ryl3dTZYhX", "https://openreview.net/forum?id=B1e8CsRctX¬eId=BJxgXKZdhQ"], "review_cdate": [1541218674718, 1541115252137, 1541048600322], "review_tcdate": [1541218674718, 1541115252137, 1541048600322], "review_tmdate": [1544100112787, 1544095720166, 1543196531079], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1e8CsRctX", "B1e8CsRctX", "B1e8CsRctX"], "review_content": [{"title": "Needs a lot of work on improving technical rigor and clarity", "review": "Note to Area Chair: Another paper submitted to ICLR under the title \u201cDo Deep Generative Models Know What They Don\u2019t Know?\u201d shares several similarities with the current submission.\n\nThis paper highlights a deficiency of current generative models in detecting out-of-distribution based samples based on likelihoods assigned by the model (in cases where the likelihoods are well-defined) or the discriminator distribution for GANs (where likelihoods are typically not defined). To remedy this deficiency, the paper proposes to use ensembles of generative models to obtain a robust WAIC criteria for anomaly detection.\n\nMy main concern is with the level of technical rigor of this work. 
Much of this has to do with the presentation, which reads to me more like a summary blog post than a technical paper.\n- I couldn\u2019t find a formal specification of the anomaly detection setup and how generative models are used for this task anywhere in the paper.\n- Section 2 seems to be the major contribution of this work. But it was very hard to understand what exactly is going on. What is the notation for the generative distribution? Introduction uses p_theta. Page 2, Paragraph 1 uses q_theta (x). Eq. (1) uses p_theta and then the following paragraphs use q_theta.\n- In Eq. (1), is theta a random variable?\n- How are generative ensembles trained? All the paper says is \u201cindependently trained\u201d. Is the parameter initialization different? Is the dataset shuffling different? Is the dataset sampled with replacement (as in bootstrapping)?\n- \u201cBy training an ensemble of GANs we can estimate the posterior distribution over model deciscion boundaries D_theta(x), or equivalently, the posterior distribution over alternate distributions q_theta. In other words, we can use uncertainty estimation on randomly sampled discriminators to de-correlate the OoD classification errors made by a single discriminator\u201d Why is the discriminator parameterized by theta? What is an ensemble of GANs? Multiple generators or multiple discriminators or both? What are \u201crandomly sampled discriminators\u201d? What do the authors mean by \"posterior distribution over alternate distributions\"?\n\nWith regards to the technical assessment, I have the following questions for the authors:\n- In Figure 1, how do the histograms look for the training distribution of CIFAR? If the histograms for train and test have an overlap much higher than the overlap between the train of CIFAR and test set of any other distribution, then ensembling seems unnecessary and anomaly detection can simply be done via setting a maximum and a minimum threshold on the likelihood for a test point. In addition to the histograms, I'd be curious to see results with this baseline mechanism.\n- Why should the WAIC criterion weigh the mean and variance equally?\n- Did the authors actually try to fix the posterior collapse issue in Figure 3b using beta-VAEs as recommended? Given the simplicity of implementing beta-VAEs, this should be a rather easy experiment to include.\n\nMinor typos:\n- ODIN and VIB are not defined in the abstract\n- Page 3: \u201cdeciscion\u201d\n- Page 2, para 2: \u201clog_\\theta p(x)\u201d", "rating": "5: Marginally below acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Well below the ICLR level", "review": "- Novelty is minimal and is well below the level required by ICLR.\n\n- The reasoning lists the problems of GANs and then the fact that GAN ensembles would target that, based on a toy example in Figure 2. \n\n- Why choose GANs in the first place, though? Given the buildup, and given the other well-known training issues about GANs, are they the right choice for the basic modeling units, i.e. the ensemble units, in such a case? A GAN's adversary bases its comparisons on individual data points, rather than on distribution comparisons or on groups of points like MMD, etc. I understand the reasoning behind the choice of generative models (GMs), but it is choosing GANs out of the set of GMs in this particular case that I am referring to. \n\n- The paper is quite well written. 
The ideas as well as the reasoning flow very smoothly. \n\n- Experiments are well prepared. \n\nRather minor:\n- page 1: \"When training and test distributions differ, neural networks may provide ...\" This is true, but a clarification may be warranted here regarding the fact that the neural networks involved with several modeling problems, e.g. the ones trained for domain adaptation or meta-learning tasks, target this shift or difference in domains, and typically provide a way to tackle this problem.\n\n\n\nUpdate: I have read the rebuttal. My score remains unchanged. ", "rating": "4: Ok but not good enough - rejection", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"title": "Interesting combination of the previous work with useful results.", "review": "The authors present an OOD detection scheme with an ensemble of generative models. When the exact likelihood is available from the generative model, the authors approximate the WAIC score. For GAN models, the authors compute the variance over the discriminators for any given input. They show that this method outperforms ODIN and VIB on image datasets and also achieves comparable performance on the Kaggle Credit Fraud dataset.\n\nThe paper is overall well-written and easy to follow. I only have a few comments about the work.\n\nI think the authors should address the following points in the paper.\n- What is the size of the ensemble for the experiments?\n- How does the size of the ensemble influence the measured performance?\n- It is Fast Gradient Sign Method (FGSM), not FSGM. See [1]. Citing [1] for FGSM would also be appropriate.\n\nQuality. The submission is technically sound. The empirical results support the claims, and the authors discuss the failure cases. \nClarity. The paper is well-written and easy to follow while providing useful insight and connecting previous work to the subject of study.\nOriginality. To the best of my knowledge, the proposed approach is a novel combination of well-known techniques.\nSignificance. The presented idea improves over the state-of-the-art.\n\n\nReferences\n[1] I. Goodfellow, J. Shlens, and C. Szegedy, \u201cExplaining and Harnessing Adversarial Examples,\u201d in ICLR, 2015.\n-------------------\nRevision. 
The rating was revised to 6 after the discussion and rebuttal.\n ", "rating": "6: Marginally above acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": ["S1eGNUWDk4", "S1gUWF9U1N", "S1gbeoTt0X", "B1gvpV3tAQ", "SyefX0ayAX", "ByxCtppk0m", "SyxJSTTy0Q", "r1xAPnTy0X"], "comment_cdate": [1544128042038, 1544100094234, 1543260904532, 1543255230800, 1542606361612, 1542606213979, 1542606135283, 1542605925779], "comment_tcdate": [1544128042038, 1544100094234, 1543260904532, 1543255230800, 1542606361612, 1542606213979, 1542606135283, 1542605925779], "comment_tmdate": [1544128095437, 1544100094234, 1543260904532, 1543255230800, 1542606361612, 1542606213979, 1542606135283, 1542606064119], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2019/Conference/Paper898/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper898/AnonReviewer1", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper898/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper898/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper898/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper898/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper898/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper898/Authors", "ICLR.cc/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Re: Response", "comment": "Thank you for the detailed feedback. That's really helpful. \n\nRe: Histograms. We are a bit confused now as to what you mean by \"overlap\" based scoring rule. Under our experimental setup, anomaly detection is performed on a per-example basis from the test distribution. Although we eval AUROC on an empirical test set, we don't have access to a population of test samples in the scoring rule. So it is not possible to compute \"overlap\" between two histograms of test points because we evaluate each test point independently.\n\nReferencing your earlier comment, it is possible to use training histograms to build a scoring rule. As you described, we can construct an indicator function that classifies a data point as an anomaly if it has lower likelihood than the least probable training point or higher likelihood than the most probable training point (where training points are from the empirical training distribution). We can update our results with such a baseline (if this is what you intended), though as we've said before, MNIST and Fashion MNIST test distributions (and NotMNIST too) have considerably overlapping histograms, so it is doomed to fail. We don't think that this is a sufficiently strong baseline for the purposes of evaluating our method.\n\n- Re: As the number of data points n grows large, the expectation of WAIC converges to generalization loss (which is a surrogate objective for KL distance between model and true distribution). See Eq. 31 of http://www.jmlr.org/papers/volume14/watanabe13a/watanabe13a.pdf, and Watanabe 2009, 2010b for proofs. Now suppose we have a modified objective WAIC2 = E[log p(x)] - alpha * Var[log p(x)]. 
For alpha != 1, this would result in a biased asymptotic estimate of generalization error.\n\nThat said, calibrating alpha according to a validation set might yield better AUROC. However, we avoid doing this in our experiments because it would presuppose an OoD distribution (validation set), which may lead to poor performance on the test OoD distribution (which may be different from the validation set). Also, to make comparison to prior work easier (since AUROC is supposed to be threshold-independent for a single scalar), we didn't modify the WAIC score function. \n\n\n"}, {"title": "Response ", "comment": "Thanks for the response!\n\n- Some of my concerns regarding clarity have been addressed. Must note that clarity can still benefit from some more editing (a self-contained paper on anomaly detection will describe the experimental setup rather than just referring the reader to two other papers, the GAN notation of q_\\theta is clear to me now but is frankly unnecessary imo, details on how many ensembles were trained and how they differed, etc.).\n\n- Re: Posterior Collapse. I also appreciate the results on this experiment.\n\nBased on the first two points, I have updated my score. However, I found the response to the other concerns rather dissatisfying.\n\n- Re: Histograms. Besides including the training likelihood results for the datasets in the submission, I think the AUROC based on an \"overlap\" based scoring rule is a very reasonable and important baseline to include before the expensive process of training ensembles.\n- WAIC. I think my question was orthogonal to the link you provided. I was more interested in knowing why the mean and variance terms should be weighted equally, rather than having a hyperparameter controlling their strengths which could be decided based on, e.g., a validation set. Some intuition/experiments in this regard would have been welcome."}, {"title": "Updated Fashion MNIST numbers to fix a bug.", "comment": "Our improved VAE experiments on Fashion MNIST had a minor evaluation bug in which some OoD test samples from Omniglot got mixed up into other distributions' evaluation. We've updated the paper to fix this error. After the rebuttal deadline, we'll update our related work section to discuss some of the GAN papers R2 mentioned in their recent comment. "}, {"title": "Explaining Adv Defense definition, and thanks for the references!", "comment": "Our perspective that \u201cAdversarial Defense is making ML robust to OoD inputs\u201d has been established in prior work; see the citations on model-independent interpretations of adversarial examples being \u201coff the data manifold\u201d (https://arxiv.org/abs/1801.02774 [8, 9, 10, 11]). In line with the ideas from https://arxiv.org/abs/1807.06732, we also argue that there are a lot of other \u201coff-manifold\u201d inputs besides Lp-norm perturbations. The presence of an adversary can also be regarded as supplying \u201cworst case inputs\u201d, which is why we don\u2019t focus on whether OoD inputs originate from a human adversary or not. \n\nAs for whether model information can be merged into OoD samples: we agree that the OoD problem *is* model-independent -- constructing an OoD input does not *require* considering a model. 
But this does not preclude the use of a model to construct an OoD input (after all, we have implicit models in our heads when we declare what an OoD input is with respect to the population data).\n\nOne type of adversarial example, which we explored in the paper, is to take a reference input that is constructed independently of the model (e.g. Gaussian noise) and perturb it according to a likelihood model which happens to be the one model (or ensemble member) we evaluate it on.\n\n--\nThank you for providing these references for our consideration. These papers all use adversarially trained generators to supply a discriminator with OoD inputs. As we already discussed in Section 2.1, the GAN perspective on anomaly detection is complicated by the fact that every GAN discriminator is typically regarded as an anomaly detector, but in practice is just a discriminative model between p(x) and some q(x) produced by randomized training dynamics. A single GAN discriminator is not a proper generative model for general OoD detection, since p(x)/q(x) is not very good at OoD detection on samples that lie in neither p(x) nor q(x). \n\nDespite our stated limitations of simply training a single GAN for OoD detection, the Schlegl et al., Deecke et al., and Kliger et al. works demonstrate good results on an OoD task setup similar to ours, so we will revise our \u201cfirst work\u201d claim in the paper. Thanks for catching this error.\n\nThe Lee et al. work trains a GAN to provide OoD inputs a la adversarial augmentation, but is actually a model-dependent method for OoD detection via a predictive uncertainty metric (e.g. like Deep Ensembles). \n\n"}, {"title": "Overall rebuttal comment from authors", "comment": "We thank the reviewers for helpful feedback and highlighting points of confusion in our paper.\n\nIn considering all 3 reviewers\u2019 comments (R1 \u201creads more like a summary blog post\u201d, R2 \u201cthe ideas as well as reasoning flow smoothly\u201d, R3 \u201cwell-written and easy to follow while providing useful insight and connecting previous work to the subject of study\u201d) we believe that all reviewers consider our presentation to be logically clear, but may be lacking in technical clarity (raised by Reviewer 1) or novelty (raised by Reviewer 2). There is especially some confusion regarding our notation and how it relates to GAN models for anomaly detection (e.g. \u201cposterior distribution over alternate distributions\u201d). \n\nTo address technical clarity issues raised by R1, we\u2019ve answered their questions in comments and made edits to our paper to make the problem setup and notation more clear. We\u2019ve responded directly to R2\u2019s comment on why we believe our work is novel. \n\nFinally, we\u2019ve updated the paper with improved VAE experiments on Fashion MNIST (confirming our hypothesis of posterior collapse).\n"}, {"title": "Thanks!", "comment": "We thank Reviewer 3 for the review and highlighting missing details from our paper. We\u2019ve added them to the paper.\n\n> - How does the size of the ensemble influence the measured performance?\n\nFor CIFAR10, we have found an ensemble of 5 models to make a large difference over an ensemble of 3 (about .7 AUROC). There seem to be diminishing returns for ensembles larger than 5.\n\n> - It is Fast Gradient Sign Method (FGSM), not FSGM. See [1]. Citing [1] for FGSM would also be appropriate.\n\nFixed, and already cited. Thanks! 
\n\n"}, {"title": "Addressing concerns about novelty and use of GANs", "comment": "We thank Reviewer 2 for their praise and raising concerns about novelty. It is an important point worth discussing.\n\nIn addition to proposing a superior method for anomaly detection, part of the novel contribution in this work involved synthesizing concepts from multiple fields likelihood estimation techniques from deep generative models, adversarial defense, model uncertainty, challenging discriminative anomaly detection methods and their relationship to GAN discriminators. \n\nWe tie these disparate concepts together into a unified perspective on the OoD problem. Therefore, we took great care into making sure the motivation of our work transitions smoothly, perhaps even to the point of stating the obvious to Reviewer 2. We emphasize that to our knowledge, our work is the first to extend our understanding of the OoD problem in context of prior work in generative modeling, Bayesian Deep Learning, and anomaly detection applications for modern generative models. These connections are not well known in the community and we hope that our paper will amend that. \n\nAdditional novel aspects of this work: The observation that density estimators (as implemented by a deep generative model) are NOT robust to OoD inputs themselves is a novel observation, concurrent with another ICLR submission. To our knowledge, we are also the first work to leverage the modern advancements in deep generative models to perform anomaly detection on high-dimensional inputs such as images. \n\nTo address R2\u2019s comments \u201cThe reasoning lists the problems of GANs\u201d and \u201cWhy to choose GANs though in the first place?\u201d, we emphasize that we are not saying GANs shouldn\u2019t be used for anomaly detection, only that their lack of exact likelihoods presents some challenges. We make an effort to make them work in our paper in our comparison to other generative model families.\n\n> - page 1: \"When training and test distributions differ, neural networks may provide ...\" \n\nThere are varying degrees of \u201cout-of-distribution-ness\u201d at test time. One way to carve up the problem specification is to consider inputs that (1) are different than the training set but you want the model to perform well on anyway, e.g. a subtle change in physics parameters a robot encounters when deployed. (2) inputs the model has no business classifying, i.e. showing a picture of a building to a cat/dog classifier. \n\nThe first situation is what you are describing, in which methods like sim2real, domain adaptation, meta-learning can address. As we stated in Section 3.1, our paper primarily deals with the second case, in which you don\u2019t want the model to give bogus outputs for bogus inputs, which also may be adversarial. We appreciate the feedback that this might be confusing if the reader is assuming problem formulation (1); we welcome the other reviewers to chime in here if it would make things more clear to state this.\n"}, {"title": "Addressed issues of technical clarity, performed follow-up experiments on posterior collapse", "comment": "Thank you for the detailed review and critique.\n\nWe agree that \u201cDo Deep ... They Don\u2019t Know?\u201d shares a concurrent discovery with us in identifying how generative models assign wrong likelihood to OoD inputs, and have updated our paper to cite their contribution. 
Our contributions differ in that their work performs analysis of why this phenomenon occurs, while we demonstrate that this can be fixed by using uncertainty estimation and WAIC, and then apply these fixed models to the OoD problem.\n\nWe agree that our paper could use more technical clarity, i.e. make this work easier to reproduce. The open-sourced code will be linked to the paper after the double-blind review process, which we believe to be the highest standard of technical clarity when specifying our method and evaluation metrics. In the meantime, we\u2019ve also done the following:\n\n1. We\u2019ve clarified Section 4 to re-iterate that our anomaly detection problem specification is identical to that of Liang et al. 2017 and Alemi et al. 2017, and our evaluation metric (AUROC) is the same.\n\n2. Clarified our notation for p, q, p_theta, q_theta in the paper. We think that R1\u2019s confusion on our GAN ensemble setup can be addressed by clarifying the reasoning behind our terminology, and explaining a bit further what it means to \u201crandomly sample a discriminator from a posterior distribution over alternate distributions\u201d.\n\nThe choice of terminology is motivated by our GAN variant of generative ensembles. If p(x) is the true generative distribution, p_theta(x) is some generative model\u2019s approximation of it. In Eq (1), theta is a (multivariate) random variable parameterizing an abstract generative model (e.g. weights in a neural network). We\u2019ve clarified this in the intro. \n\nIn the case of GANs, a subset of the variable theta parameterizes the generator and a subset of theta parameterizes the discriminator. Therefore, samples from the generator come from a generative distribution q_\\theta(x). The reason we notate a GAN generator\u2019s distribution as q_\\theta(x) and not p_\\theta(x) (which we use for referring to normalizing flow and VAE likelihood models) is that in GANs, the discriminator is being optimized to learn a likelihood ratio p(x) / q_\\theta(x). That is, separating true data samples from p(x) from OoD samples from q_\\theta(x). \n\nThus, q(x) and q_theta(x) always refer to OoD distributions. This also makes discussion more clear in the context of discriminative anomaly detection classifiers (which learn p(x)/q(x)) and GAN discriminators (which learn p(x)/q_theta(x)).\n\nIn Section 2.1, we mention \u201crandomly sampled discriminators\u201d and \u201cposterior distribution over alternate distributions\u201d. Models (theta) trained under SGD can be assumed to be drawn randomly from some posterior distribution p(theta|x). In a GAN, the random variable theta specifies the alternate distribution q_\\theta(x), or equivalently, the implicit discriminator likelihood ratio p(x) / q_\\theta(x) (when the discriminator is trained with sigmoid cross entropy, which we do). Our GAN ensemble samples entire GANs (i.e. generator and discriminator) together, by training 5 GANs independently and then combining discriminator predictions for OoD classification. It would be problematic to sample only discriminators in the training process, since that does not change q_\\theta(x) (and there is the question of how feedback to the generators should be accomplished in this manner). \n\nTechnical assessment questions:\n\n- Re: Histograms. This is a reasonable suggestion, and resembles the interpretation of likelihood predictions as a feature, rather than a scoring function. The scoring function you propose is a min/max function over the distribution of features. 
Another approach would be a statistical hypothesis test using the training distribution\u2019s likelihood predictions as the variable of interest. Unfortunately, the likelihoods of OoD distributions often overlap with the in-distribution test samples (MNIST and Fashion MNIST VAEs). In training a GLOW model, you will also find a gap between train and test likelihoods. So generative models are not good enough yet to reduce the generalization gap of likelihood models to zero. \n\n- We refer the reviewer to \"Understanding predictive information criteria for Bayesian models\" (Gelman et al.) for a motivation of the WAIC objective. In short, the variance term is a correction for how much the fitting of k parameters will increase predictive accuracy, by chance alone. k is estimated by the variance. \n\n- Re: Posterior Collapse: Good suggestion! We went back to our VAE setup and ran a few follow-up experiments to test this hypothesis. The short answer is that \u201cyes, decreasing Beta reduced posterior collapse and made things better\u201d. We\u2019ve edited section 4.1 to document our findings. \n\nMinor typos: They have been fixed in the latest revision. Thank you so much for catching these!"}], "comment_replyto": ["S1gUWF9U1N", "r1xAPnTy0X", "SyefX0ayAX", "rklO3mCu0X", "B1e8CsRctX", "BJxgXKZdhQ", "ryl3dTZYhX", "SJxjubiqhm"], "comment_url": ["https://openreview.net/forum?id=B1e8CsRctX&noteId=S1eGNUWDk4", "https://openreview.net/forum?id=B1e8CsRctX&noteId=S1gUWF9U1N", "https://openreview.net/forum?id=B1e8CsRctX&noteId=S1gbeoTt0X", "https://openreview.net/forum?id=B1e8CsRctX&noteId=B1gvpV3tAQ", "https://openreview.net/forum?id=B1e8CsRctX&noteId=SyefX0ayAX", "https://openreview.net/forum?id=B1e8CsRctX&noteId=ByxCtppk0m", "https://openreview.net/forum?id=B1e8CsRctX&noteId=SyxJSTTy0Q", "https://openreview.net/forum?id=B1e8CsRctX&noteId=r1xAPnTy0X"], "meta_review_cdate": 1544746035470, "meta_review_tcdate": 1544746035470, "meta_review_tmdate": 1545354515599, "meta_review_ddate": null, "meta_review_title": "Promising but more work needed to reach maturity", "meta_review_metareview": "This paper suggests the use of generative ensembles for detecting out-of-distribution samples. \n\nThe reviewers found the paper easy to read, especially after the changes made during the rebuttal. However, further elaboration in the technical descriptions (and assumptions made) could make the work seem more mature, as R2 and R1 point out. \n\nThe general feeling from reading the reviews and discussions is that this is promising work that, nevertheless, needs some more novel elements. A possible avenue for increasing the contribution of the paper is to follow R1\u2019s advice to extract more convincing insights from the results. \n", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2019/Conference/Paper898/Area_Chair1"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1e8CsRctX&noteId=rJeoVE_xlE"], "decision": "Reject"}
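For concreteness, the two scoring rules debated at length in the record above (the ensemble WAIC score, i.e. mean minus variance of per-model log-likelihoods per Eq. (1) of the paper, and the reviewer's proposed min/max training-likelihood threshold baseline) can be sketched in a few lines of Python. This is a minimal illustrative reconstruction from the definitions given in the discussion, not the authors' released code; the ensemble construction and the `log_prob` model interface are assumptions.

```python
import numpy as np

def waic_score(log_likelihoods):
    # WAIC(x) = E_theta[log p_theta(x)] - Var_theta[log p_theta(x)],
    # estimated over an ensemble of independently trained likelihood models.
    # Lower scores suggest an out-of-distribution input.
    ll = np.asarray(log_likelihoods, dtype=np.float64)
    return ll.mean() - ll.var()

def waic_is_anomalous(models, x, threshold):
    # `models` is a hypothetical ensemble whose members expose a log_prob(x)
    # method (an assumed interface, not the authors' actual API).
    return waic_score([m.log_prob(x) for m in models]) < threshold

def minmax_is_anomalous(train_log_likelihoods, log_likelihood_x):
    # Reviewer-proposed baseline: flag x as anomalous if its likelihood under a
    # single model falls outside the [min, max] range seen on the training set.
    lo, hi = np.min(train_log_likelihoods), np.max(train_log_likelihoods)
    return not (lo <= log_likelihood_x <= hi)
```

As the thread notes, the min/max baseline fails when in-distribution and OoD likelihood histograms overlap heavily (e.g. MNIST vs. Fashion MNIST), which is the motivation for the variance penalty in the WAIC score.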