{"forum": "B1MX5j0cFX", "submission_url": "https://openreview.net/forum?id=B1MX5j0cFX", "submission_content": {"title": "Universal Attacks on Equivariant Networks", "abstract": "Adversarial attacks on neural networks perturb the input at test time in order to fool trained and deployed neural network models. Most attacks such as gradient-based Fast Gradient Sign Method (FGSM) by Goodfellow et al. 2015 and DeepFool by Moosavi-Dezfooli et al. 2016 are input-dependent, small, pixel-wise perturbations, and they give different attack directions for different inputs. On the other hand, universal adversarial attacks are input-agnostic and the same attack works for most inputs. Translation or rotation-equivariant neural network models provide one approach to prevent universal attacks based on simple geometric transformations. In this paper, we observe an interesting spectral property shared by all of the above input-dependent, pixel-wise adversarial attacks on translation and rotation-equivariant networks. We exploit this property to get a single universal attack direction that fools the model on most inputs. Moreover, we show how to compute this universal attack direction using principal components of the existing input-dependent attacks on a very small sample of test inputs. We complement our empirical results by a theoretical justification, using matrix concentration inequalities and spectral perturbation bounds. We also empirically observe that the top few principal adversarial attack directions are nearly orthogonal to the top few principal invariant directions.\n", "keywords": ["adversarial", "equivariance", "universal", "rotation", "translation", "CNN", "GCNN"], "authorids": ["amitdesh@microsoft.com", "ksandeshk@cmi.ac.in", "kv@cmi.ac.in"], "authors": ["Amit Deshpande", "Sandesh Kamath", "K V Subrahmanyam"], "TL;DR": "Universal attacks on equivariant networks using a small sample of test data", "pdf": "/pdf/7146cf052a04170d4d2c120ce1247eaa6061b84b.pdf", "paperhash": "deshpande|universal_attacks_on_equivariant_networks", "_bibtex": "@misc{\ndeshpande2019universal,\ntitle={Universal Attacks on Equivariant Networks},\nauthor={Amit Deshpande and Sandesh Kamath and K V Subrahmanyam},\nyear={2019},\nurl={https://openreview.net/forum?id=B1MX5j0cFX},\n}"}, "submission_cdate": 1538087818942, "submission_tcdate": 1538087818942, "submission_tmdate": 1545355376051, "submission_ddate": null, "review_id": ["ryeezddw2X", "ryg8nMgR3Q", "Bkx4Djbh2Q"], "review_url": ["https://openreview.net/forum?id=B1MX5j0cFX¬eId=ryeezddw2X", "https://openreview.net/forum?id=B1MX5j0cFX¬eId=ryg8nMgR3Q", "https://openreview.net/forum?id=B1MX5j0cFX¬eId=Bkx4Djbh2Q"], "review_cdate": [1541011464422, 1541436077790, 1541311324068], "review_tcdate": [1541011464422, 1541436077790, 1541311324068], "review_tmdate": [1543251213132, 1541533924975, 1541533924777], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1MX5j0cFX", "B1MX5j0cFX", "B1MX5j0cFX"], "review_content": [{"title": "Interesting observations, but paper needs clarity in writing ", "review": "The paper presents some interesting observations related to the connection between the universal adversarial attacks on CNNs and spectral properties. While most of the results are empirical, the authors present two theorems to justify some of the observations. 
However, the paper is poorly written and very hard to read. Rather than providing too many plots/results in the main paper (maybe move some to the supplementary material), the empirical results should be better explained to help the readers. Similarly, the implications of the theorems are not really clear and a bit hand-wavy. \n\nxxxxxxxxxxxxxx\n\nIt seems that the authors provided a generic response to all the reviewers, and I am not sure if they acknowledge the lack of clarity and the many hand-wavy explanations in the paper. This issue has been raised by the other reviewers too and is quite critical for this to become a good paper worthy of ICLR. Therefore, I am unable to update my score for this paper. However, I do appreciate the comparison with Moosavi-Dezfooli et al. (CVPR'17); this is a good addition, as suggested by another reviewer. ", "rating": "5: Marginally below acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Principal directions towards universal attacks", "review": "This paper studies the problem of computing non-data-specific perturbations, also known as universal perturbations, to attack neural networks and exploit their inherent vulnerability. Compared to previous works in the domain, the authors look specifically at equivariant networks, and derive geometric insights and methods to compute universal perturbations for these networks. \n\nThe paper starts by analysing the main/principal directions of a set of perturbations that are able to change the decisions of different forms of equivariant neural networks. With this heuristic study, a few main directions are shown to be shared by most adversarial perturbations. The authors then propose to construct universal perturbations built on the insights given by the principal directions of perturbations, which is an interesting and effective method. In addition, it is shown that a few adversarial samples are sufficient to identify the principal directions pretty accurately. The fooling rates achieved by this method are pretty good, which demonstrates that the proposed strategy is reasonable.\n\nThe key idea in this paper (using principal shared directions of perturbations, computed on a small subset of data points) has unfortunately already been proposed and tested in classical (non-equivariant) neural networks - see for example Fig. 9 in Moosavi-Dezfooli et al., 2017, cited in the paper and published in CVPR 2017. The present paper does, however, contribute a few additional bits of information with a nice theoretical analysis, whereas the previous works were mostly based on heuristics. This is probably not sufficient, however, to pass the cut at ICLR. \n\nThe interesting additional novelty here is the study of equivariant networks. However, this ends up falling short of initial expectations - there seems to be nothing specific to equivariant networks in the proposed study, and the solution and algorithm are actually applicable to any neural network architecture (?). Also, no specific insights are derived for equivariant networks, which could potentially be very interesting for making progress in better understanding equivariant representations, which remain a widely open research problem. \n\nIn general, the paper has a non-classical organisation, with a lot of heuristics that are not discussed in depth - this gives the high-level impression that the proposed idea is potentially nice, but only superficially addressed. 
It should probably be improved in future versions of this work. ", "rating": "4: Ok but not good enough - rejection", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"title": "An interesting observation, but the contribution is not significant enough", "review": "The authors made an interesting observation: there is an important common subspace of Gradient/FGSM/DeepFool attack directions shared across all examples. Therefore, they propose to use the top SVD components of these directions to conduct a universal attack. This is an interesting finding but also not surprising; we know the gradient of the loss function w.r.t. the input can be used for interpretability, and in MNIST examples it usually reveals some rough shape of the class. This is also observed in Figures 8-13 in this paper, and thus it makes sense that the gradient directions share a common subspace. Therefore I think this observation by itself is not significant enough. \n\nUsing this for a universal attack is interesting; however, the experiments are not that convincing: \n\n1. To show this is a good way to mount a universal attack, I think the authors should compare with the previous work of Moosavi-Dezfooli et al. \n\n2. All the experiments are on MNIST. How about CIFAR/ImageNet? \n", "rating": "4: Ok but not good enough - rejection", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}], "comment_id": ["H1gumUHtRm", "Syep8HSKAX", "rylRaSBFRm"], "comment_cdate": [1543226912346, 1543226708821, 1543226821785], "comment_tcdate": [1543226912346, 1543226708821, 1543226821785], "comment_tmdate": [1543226912346, 1543226846838, 1543226821785], "comment_readers": [["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2019/Conference/Paper520/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper520/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper520/Authors", "ICLR.cc/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Revised with CIFAR10 experiments and comparison with Moosavi-Dezfooli et al.", "comment": "We show that the principal attack directions are nearly orthogonal to the principal invariant directions. Models learn invariance to rotations either when we explicitly use an equivariant network (GCNN, RotEqNet), when we train any model (StdCNN, fully connected NN) with rotation augmentations, or when we do both. We show a simple universal adversarial attack using the top principal component of any input-dependent attack direction on a small test sample. We show that even the simple approach of using the top singular vector of the gradients on a small sample of test points is comparable to the attack of Moosavi-Dezfooli et al. (CVPR'17). Moreover, the fooling rate of our universal attack gets better as the model is trained with larger rotation augmentations."}, {"title": "Revised with CIFAR10 experiments and comparison with Moosavi-Dezfooli et al.", "comment": "We have modified the submission to include the experiments on the CIFAR-10 dataset requested by the reviewer. \n\nWe show that the principal attack directions are nearly orthogonal to the principal invariant directions. Models learn invariance to rotations either when we explicitly use an equivariant network (GCNN, RotEqNet), when we train any model (StdCNN, fully connected NN) with rotation augmentations, or when we do both. 
We show a simple universal adversarial attack using the top principal component of any input-dependent attack direction on a small test sample. We show that even the simple approach of using the top singular vector of the gradients on a small sample of test points is comparable to the attack of Moosavi-Dezfooli et al. (CVPR'17). Moreover, the fooling rate of our universal attack gets better as the model is trained with larger rotation augmentations."}, {"title": "Revised with CIFAR10 experiments and comparison with Moosavi-Dezfooli et al.", "comment": "We show that the principal attack directions are nearly orthogonal to the principal invariant directions. Models learn invariance to rotations either when we explicitly use an equivariant network (GCNN, RotEqNet), when we train any model (StdCNN, fully connected NN) with rotation augmentations, or when we do both. We show a simple universal adversarial attack using the top principal component of any input-dependent attack direction on a small test sample. We show that even the simple approach of using the top singular vector of the gradients on a small sample of test points is comparable to the attack of Moosavi-Dezfooli et al. (CVPR'17). Moreover, the fooling rate of our universal attack gets better as the model is trained with larger rotation augmentations."}], "comment_replyto": ["ryeezddw2X", "Bkx4Djbh2Q", "ryg8nMgR3Q"], "comment_url": ["https://openreview.net/forum?id=B1MX5j0cFX&noteId=H1gumUHtRm", "https://openreview.net/forum?id=B1MX5j0cFX&noteId=Syep8HSKAX", "https://openreview.net/forum?id=B1MX5j0cFX&noteId=rylRaSBFRm"], "meta_review_cdate": 1544854818909, "meta_review_tcdate": 1544854818909, "meta_review_tmdate": 1545354532637, "meta_review_ddate": null, "meta_review_title": "Some interesting contributions, but there are significant exposition (and novelty) concerns", "meta_review_metareview": "The topic of universal adversarial perturbations is quite intriguing and fairly poorly studied, and the paper provides a mix of new insights, both theoretical and empirical in nature. However, significant presentation issues make it hard to properly understand and evaluate them. In particular, the theoretical part feels rushed and not sufficiently rigorous, and it is unclear why focusing on the case of equivariant networks is crucial. Also, it would be useful if the authors put more effort into explaining how their contributions fit into the context of prior work in the area.\n\nOverall, this paper has the potential to become a solid contribution once the above shortcomings are addressed.", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2019/Conference/Paper520/Area_Chair1"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1MX5j0cFX&noteId=SJxj7TGfgE"], "decision": "Reject"}
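
Editor's note: the author responses above repeatedly describe the core recipe (take the per-example gradient attack directions on a small test sample, compute their top singular vector, and use it as a single universal perturbation). A minimal PyTorch sketch of that idea follows, assuming a generic classifier `model`, a loss `loss_fn`, a small batch of test `inputs`/`labels`, and a perturbation budget `epsilon`; these names are illustrative assumptions, not the authors' released code, and the sketch shows the general technique rather than the paper's exact implementation.

```python
# Hypothetical sketch (not the authors' code): stack per-example loss
# gradients on a small test sample into a matrix, take its top
# right-singular vector, and use that single direction as a universal attack.
import torch

def universal_attack_direction(model, loss_fn, inputs, labels):
    """Top principal component of per-example gradient attack directions."""
    model.eval()
    x = inputs.clone().requires_grad_(True)
    loss_fn(model(x), labels).backward()
    # Each row of G is one example's flattened input gradient.
    G = x.grad.detach().reshape(x.shape[0], -1)
    # Top right-singular vector of G = top principal attack direction.
    _, _, Vh = torch.linalg.svd(G, full_matrices=False)
    v = Vh[0]
    return v / v.norm()

def fooling_rate(model, inputs, labels, v, epsilon=0.1):
    """Fraction of predictions flipped by the single direction eps * v."""
    with torch.no_grad():
        clean = model(inputs).argmax(dim=1)
        adv = model(inputs + epsilon * v.reshape(inputs.shape[1:])).argmax(dim=1)
    return (clean != adv).float().mean().item()
```

Since a singular vector is only defined up to sign, in practice one would presumably evaluate both +v and -v and keep the better of the two. Note also that nothing in this pipeline is specific to equivariant architectures (StdCNN, GCNN, RotEqNet all fit the same `model` slot), which is precisely the generality concern raised by the second reviewer.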