{"forum": "B1GAUs0cKQ", "submission_url": "https://openreview.net/forum?id=B1GAUs0cKQ", "submission_content": {"title": "Variance Networks: When Expectation Does Not Meet Your Expectations", "abstract": "Ordinary stochastic neural networks mostly rely on the expected values of their weights to make predictions, whereas the induced noise is mostly used to capture the uncertainty, prevent overfitting and slightly boost the performance through test-time averaging. In this paper, we introduce variance layers, a different kind of stochastic layers. Each weight of a variance layer follows a zero-mean distribution and is only parameterized by its variance. It means that each object is represented by a zero-mean distribution in the space of the activations. We show that such layers can learn surprisingly well, can serve as an efficient exploration tool in reinforcement learning tasks and provide a decent defense against adversarial attacks. We also show that a number of conventional Bayesian neural networks naturally converge to such zero-mean posteriors. We observe that in these cases such zero-mean parameterization leads to a much better training objective than more flexible conventional parameterizations where the mean is being learned.", "keywords": ["deep learning", "variational inference", "variational dropout"], "authorids": ["k.necludov@gmail.com", "dmolch111@gmail.com", "ars.ashuha@gmail.com", "vetrovd@yandex.ru"], "authors": ["Kirill Neklyudov", "Dmitry Molchanov", "Arsenii Ashukha", "Dmitry Vetrov"], "TL;DR": "It is possible to learn a zero-centered Gaussian distribution over the weights of a neural network by learning only variances, and it works surprisingly well.", "pdf": "/pdf/3befe31b06f3e6adab32966922f1df56500e8c08.pdf", "paperhash": "neklyudov|variance_networks_when_expectation_does_not_meet_your_expectations", "_bibtex": "@inproceedings{\nneklyudov2018variance,\ntitle={Variance Networks: When Expectation Does Not Meet Your Expectations},\nauthor={Kirill Neklyudov and Dmitry Molchanov and Arsenii Ashukha and Dmitry Vetrov},\nbooktitle={International Conference on Learning Representations},\nyear={2019},\nurl={https://openreview.net/forum?id=B1GAUs0cKQ},\n}"}, "submission_cdate": 1538087766138, "submission_tcdate": 1538087766138, "submission_tmdate": 1550491833568, "submission_ddate": null, "review_id": ["SkxrImCK2Q", "r1e0vhrKhX", "HygV2aqO3Q"], "review_url": ["https://openreview.net/forum?id=B1GAUs0cKQ¬eId=SkxrImCK2Q", "https://openreview.net/forum?id=B1GAUs0cKQ¬eId=r1e0vhrKhX", "https://openreview.net/forum?id=B1GAUs0cKQ¬eId=HygV2aqO3Q"], "review_cdate": [1541165900839, 1541131365595, 1541086636442], "review_tcdate": [1541165900839, 1541131365595, 1541086636442], "review_tmdate": [1541534182089, 1541534181888, 1541534181682], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1GAUs0cKQ", "B1GAUs0cKQ", "B1GAUs0cKQ"], "review_content": [{"title": "An interesting paper, but a few questions needed to be answered", "review": "This paper investigates the effects of mean of variational posterior and proposes variance layer, which only uses variance to store information.\n\nOverally, this paper analyzes an important but not well explored topic of variational dropout methods\u2014the mean propagation at test time, and discusses the effect of weight variance in building a 
variational posterior for Bayesian neural networks. These findings are interesting and I appreciate the analysis. \n\nHowever, I think the claimed benefits of the variance layer are not well supported. The variance layer requires test-time averaging to achieve competitive accuracy, while the additive case in Eq. (14) using mean propagation achieves similar performance (e.g., the results in Table 1). The results in Sec 6 lack comparison to other Bayesian methods (e.g., the additive case in Eq. (14)). \n\nBesides, there exist several problems that need to be addressed.\n\nSec 5.\nSec 5 is a little hard to follow. Which prior is chosen to produce the results in Table 1? KL(q||p)=0 for the zero-mean case corresponds to the fact that the variational posterior equals the prior, which implies the ARD prior if I did not misunderstand. In this case, the ground truth posterior p(w|D) for different methods is different and the corresponding ELBOs for them are incomparable.\n\nSec 6. \nThe setting in Table 2 is also unclear. As ``Variance\u2019\u2019 stands for variational dropout, what does ``Dropout\u2019\u2019 mean? The original Bernoulli dropout? Besides, I\u2019m wondering why the variance layer (i.e., the zero-mean case in Eq. (14)) is not directly implemented in this case.\n\n", "rating": "6: Marginally above acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "The variance network uses a variational distribution with zero mean, but it achieves nice performance. ", "review": "This paper studies variance neural networks, which approximate the posterior of Bayesian neural networks with zero-mean Gaussian distributions. The inference results are surprisingly good even though there is no information in the mean of the posterior. It further shows that several variational dropout methods are closely related to the proposed method. The experiment indicates that the ELBO can actually be better optimized with this restricted form of variational distribution. \n\nThe paper is clearly written and easy to follow. The technique in the paper is solid.\n\nHowever, the authors might need to clarify a few questions below. \n\n\nQ1: if every transformation is an antisymmetric non-linearity, then it seems that the expected distribution of $t$ in (2) is zero. Is this true or not? In other words, class information has to be read out from the encoding of instances in Fig 1. It seems antisymmetric operators cannot do so, as they will only get symmetric distributions from symmetric distributions. \n\nQ2: it is not straightforward to see why the KL term needs to go to zero. In my understanding, the posterior aims to fit two objectives: maximizing the data likelihood and minimizing the KL term. When the signal from the data is strong (e.g. a large amount of data), the first objective becomes more important. Then q does not really try to make the KL zero, and alpha has no reason to go to infinity. Can you explain more? \n\nQ3: Is the claimed benefit from the optimization procedure or the special structure of the variance layer? Is it possible to test the hypothesis by 1) initializing a q distribution with a learnable mean from the solution of the variance neural network and then 2) optimizing q? Then the optimization procedure should continue to increase the ELBO. Then compare the learned q against the variance neural network. If the learned q is better than the variance network -- it means the network structure is better for optimization, but the structure itself might not be so special. 
If the learned q is worse than the variance network, then the structure is interesting. \n\n\nA few detailed comments:\n\n1. log U is used without definition. \n2. If the paper had a few sentences explaining the \"Gaussian dropout approximate posterior\", section 4 would be smoother to read. ", "rating": "6: Marginally above acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Stochastic neural networks with a zero-mean posterior on weights", "review": "This paper introduced a new stochastic layer, termed the variance layer, for Bayesian deep learning, where the posterior on each weight is a zero-mean symmetric distribution (e.g., Gaussian, Bernoulli, Uniform). The paper showed that under 3 different prior distributions, the Gaussian Dropout layer can converge to a variance layer. Experiments verified that it can achieve similar accuracies as conventional binary dropout in image classification and reinforcement learning tasks, is more robust to adversarial attacks, and can be used to sparsify deep models.\n\nPros:\n(1)\tProposed a new type of stochastic layer (variance layer)\n(2)\tCompetitive performance on a variety of tasks: image classification, robustness to adversarial attacks, reinforcement learning, model compression\n(3)\tTheoretically grounded algorithm\n\nCons:\n(1)\tMy main concern is verification. Most of the comparisons are between variance layer (zero-mean) and conventional binary dropout, while the main argument of the paper is that it\u2019s better to set Gaussian posterior\u2019s mean to zero. So in all the experiments the paper should compare zero-mean variance layer against variational dropout (neuron-wise Eq. 14) and sparse variational dropout (additive Eq. 14), where the mean isn\u2019t zero.\n(2)\tThe paper applies variance layers to some specific layers. Are there any guidelines to select which layers should be variance layers?\n\nSome minor issues:\n(1)\tPage 4, equations of Gaussian/Bernoulli/Uniform variance layer, they should be w_ij=\u2026, instead of q(w_ij)= \u2026\n(2)\tWhat\u2019s the prior distribution used in the experiment of Table 1?\n\n", "rating": "6: Marginally above acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": ["SyeR8Podp7", "ByloevouTQ", "Hkx_V8sdam"], "comment_cdate": [1542137685739, 1542137586840, 1542137391772], "comment_tcdate": [1542137685739, 1542137586840, 1542137391772], "comment_tmdate": [1542137685739, 1542137586840, 1542137391772], "comment_readers": [["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2019/Conference/Paper222/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper222/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper222/Authors", "ICLR.cc/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Response to AnonReviewer3", "comment": "Thank you for your review and your questions!\n\n> (1) My main concern is verification. Most of the comparisons are between variance layer (zero-mean) and conventional binary dropout, while the main argument of the paper is that it\u2019s better to set Gaussian posterior\u2019s mean to zero. So in all the experiments the paper should compare zero-mean variance layer against variational dropout (neuron-wise Eq. 14) and sparse variational dropout (additive Eq. 
14), where the mean isn\u2019t zero.\n\nUsually a fully-factorized Gaussian posterior achieves the same performance as the binary dropout posterior (e.g. shown in [1,2]), which we also observed in our experiments.\n\n> (2) The paper applies variance layers to some specific layers. Are there any guidelines to select which layers should be variance layers?\n\nNeural networks are usually not very stable to high amounts of noise in the first layers. Also, we have observed that it is hard to train a variance network whose last layer (right before the softmax) is a variance layer. Therefore, a simple rule of thumb is to set the first layers to be conventional deterministic layers, then add several variance layers, and then add the last deterministic layer to obtain the logits.\n\n> (2) What\u2019s the prior distribution used in the experiment of Table 1?\n\nWe have used the log-uniform prior in this experiment. The result for the ARD prior is the same.\n\n\n[1] Gal, Yarin, and Zoubin Ghahramani. \"Dropout as a Bayesian approximation: Representing model uncertainty in deep learning.\" ICML 2016.\n[2] Louizos, Christos, and Max Welling. \"Multiplicative normalizing flows for variational Bayesian neural networks.\" ICML 2017."}, {"title": "Response to AnonReviewer2", "comment": "Thank you for your review and your questions!\n\n> Q1: if every transformation is an antisymmetric non-linearity, then it seems that the expected distribution of $t$ in (2) is zero. Is this true or not? In other words, class information has to be read out from the encoding of instances in Fig 1. It seems antisymmetric operators cannot do so, as they will only get symmetric distributions from symmetric distributions.\n\nIf we have antisymmetric non-linearities, the expected value of each neuron of each layer is indeed zero. This would fail at the regression task, as the expected output of the network would always be zero. However, in multiclass classification, we use the softmax to obtain predictions, so the posterior predictive distribution (the expected softmax) is non-trivial and allows us to obtain reasonable predictions.\n\n> Q2: it is not straightforward to see why the KL term needs to go to zero. In my understanding, the posterior aims to fit two objectives: maximizing the data likelihood and minimizing the KL term. When the signal from the data is strong (e.g. a large amount of data), the first objective becomes more important. Then q does not really try to make the KL zero, and alpha has no reason to go to infinity. Can you explain more?\n\nUnfortunately, in VI for Bayesian neural networks, the number of parameters is much larger than the amount of data, and the data-term in the ELBO gets overwhelmed by the KL-term. Most papers on VI in BNNs use some kind of trick to avoid this: some downscale the KL-term (e.g. [1,3]), others restrict the variance of the approximate posterior (e.g. [1,2,4]) or underfit the ELBO in other ways. We do not use such tricks in this paper. This is one reason for alpha to go to infinity. Usually, it is not possible to set the KL to zero and retain good predictive performance with conventional priors. However, for the log-uniform and the ARD priors, the argmin of KL(q(w)||p(w)) is a broad family of distributions: the zero-mean fully-factorized Gaussians. 
As we show, such a family is enough to achieve good predictive performance, so the overall objective is better: the KL is set to zero and the data-term is similar to the data-term of models with the full FFG posterior.\n\n> Q3: Is the claimed benefit from the optimization procedure or the special structure of the variance layer? Is it possible to test the hypothesis by 1) initializing a q distribution with a learnable mean from the solution of the variance neural network and then 2) optimizing q? Then the optimization procedure should continue to increase the ELBO. Then compare the learned q against the variance neural network. If the learned q is better than the variance network -- it means the network structure is better for optimization, but the structure itself might not be so special. If the learned q is worse than the variance network, then the structure is interesting.\n\nWe did try this. The ELBO does not increase, and the network does not change: it is equivalent to fine-tuning the variances of the variance network. The variance network is a stable local optimum: if the data-term is already good enough, the KL-term would prevent the means from increasing (when the mean mu is orders of magnitude smaller than the standard deviation sigma, the KL-term behaves like log|mu+eps| for a very small eps), and the data-term would not favor increasing mu in any way.\n\n[1] Kingma, Diederik P., Tim Salimans, and Max Welling. \"Variational dropout and the local reparameterization trick.\" NIPS 2015.\n[2] Louizos, Christos, and Max Welling. \"Multiplicative normalizing flows for variational Bayesian neural networks.\" ICML 2017.\n[3] Ullrich, Karen, Edward Meeds, and Max Welling. \"Soft weight-sharing for neural network compression.\" ICLR 2017.\n[4] Blundell, Charles, et al. \"Weight uncertainty in neural networks.\" ICML 2015."}, {"title": "Response to AnonReviewer1", "comment": "Thank you for your review and your questions!\n\n> I think the claimed benefits of the variance layer are not well supported. The variance layer requires test-time averaging to achieve competitive accuracy, while the additive case in Eq. (14) using mean propagation achieves similar performance (e.g., the results in Table 1).\n\nMost techniques for training stochastic neural networks, like dropout, variational inference or MCMC, require test-time averaging for good uncertainty estimation. If the inference time is crucial, one may use distillation techniques to mimic the predictive distribution of the variance network with a fast deterministic DNN. If one is only interested in the accuracy, variance networks are probably not the best way to go.\n\n> The results in Sec 6 lack comparison to other Bayesian methods (e.g., the additive case in Eq. (14)).\n\nUsually a fully-factorized Gaussian posterior achieves the same performance as the binary dropout posterior (e.g. shown in [1, 2]), which we also observed in our experiments. \n\n> Which prior is chosen to produce the results in Table 1? KL(q||p)=0 for the zero-mean case corresponds to the fact that the variational posterior equals the prior, which implies the ARD prior if I did not misunderstand. In this case, the ground truth posterior p(w|D) for different methods is different and the corresponding ELBOs for them are incomparable.\n\nWe have used the log-uniform prior in Table 1; however, the results for the ARD prior are the same. The result of this experiment can be discussed even without the Bayesian interpretation. 
Here we have 5 models with exactly the same objective function. Two of the models (weight-wise and additive) are equivalent and contain the other models (neuron-wise, layer-wise, zero-mean) as special cases. We would expect the richer models to achieve a better value of the training objective. Surprisingly, in practice, we observe exactly the opposite.\n\n> The setting in Table 2 is also unclear. As ``Variance\u2019\u2019 stands for variational dropout, what does ``Dropout\u2019\u2019 mean? The original Bernoulli dropout?\n\nYes, we compare to plain binary (Bernoulli) dropout. ``Variance\u2019\u2019 stands for a variance network that is trained using variational dropout (we explicitly switch to the zero-mean parameterization during test time to obtain a variance network).\n\n> Besides, I\u2019m wondering why the variance layer (i.e., the zero-mean case in Eq. (14)) is not directly implemented in this case.\n\nIt is hard to train variance layers from scratch, whereas the training of variational dropout in the layer-wise multiplicative parameterization is stable (see Appendix B). During test time, we explicitly use the zero-mean parameterization to ensure that we obtain a true variance network.\n\n[1] Gal, Yarin, and Zoubin Ghahramani. \"Dropout as a Bayesian approximation: Representing model uncertainty in deep learning.\" ICML 2016.\n[2] Louizos, Christos, and Max Welling. \"Multiplicative normalizing flows for variational Bayesian neural networks.\" ICML 2017."}], "comment_replyto": ["HygV2aqO3Q", "r1e0vhrKhX", "SkxrImCK2Q"], "comment_url": ["https://openreview.net/forum?id=B1GAUs0cKQ&noteId=SyeR8Podp7", "https://openreview.net/forum?id=B1GAUs0cKQ&noteId=ByloevouTQ", "https://openreview.net/forum?id=B1GAUs0cKQ&noteId=Hkx_V8sdam"], "meta_review_cdate": 1545002648892, "meta_review_tcdate": 1545002648892, "meta_review_tmdate": 1545354487085, "meta_review_ddate": null, "meta_review_title": "Interesting and counter-intuitive result", "meta_review_metareview": "The authors describe a very counterintuitive type of layer: one with zero-mean Gaussian weights. They show that various Bayesian deep learning algorithms tend to converge to layers of this variety. This work represents a step forward in our understanding of Bayesian deep learning methods and may potentially shed light on how to improve those methods.", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2019/Conference/Paper222/Area_Chair1"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1GAUs0cKQ&noteId=rJe-iA8Vl4"], "decision": "Accept (Poster)"}
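
Editor's note: for readers skimming this thread, a minimal sketch of the layer under discussion may be helpful. This is not the authors' code; it is an illustrative PyTorch-style sketch assuming a fully connected variance layer with weights w_ij ~ N(0, sigma_ij^2) (only the log-variances are learned), the local reparameterization trick for sampling pre-activations, and the test-time averaging of softmax outputs mentioned in the reviews and responses. The names `VarianceLinear`, `log_sigma2`, `predict_avg` and the initialization constant are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VarianceLinear(nn.Module):
    """Fully connected layer whose weights follow w_ij ~ N(0, sigma_ij^2)."""

    def __init__(self, in_features, out_features):
        super().__init__()
        # Only the (log-)variances are learned; there is no mean parameter.
        # The constant -5.0 is an arbitrary illustrative initialization.
        self.log_sigma2 = nn.Parameter(torch.full((in_features, out_features), -5.0))

    def forward(self, x):
        # Local reparameterization: with zero-mean weights the pre-activations
        # are Gaussian with mean 0 and variance (x^2) @ sigma^2, so we sample
        # them directly instead of sampling the weight matrix.
        act_var = (x ** 2) @ self.log_sigma2.exp()
        return torch.sqrt(act_var + 1e-8) * torch.randn_like(act_var)


def predict_avg(model, x, num_samples=20):
    """Test-time averaging: mean softmax over several stochastic forward passes."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(num_samples)])
    return probs.mean(dim=0)
```

Because the mean of every weight is identically zero, a deterministic "mean propagation" forward pass would output zeros; this is why predictions come from averaging the softmax over several stochastic passes, consistent with the discussion of test-time averaging above.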