{"forum": "B1eXvyHKwS", "submission_url": "https://openreview.net/forum?id=B1eXvyHKwS", "submission_content": {"authorids": ["yimingyang17@mails.ucas.edu.cn", "huzhang@microsoft.com", "wche@microsoft.com", "mazm@amt.ac.cn", "tie-yan.liu@microsoft.com"], "title": "THE EFFECT OF ADVERSARIAL TRAINING: A THEORETICAL CHARACTERIZATION", "authors": ["Mingyang Yi", "Huishuai Zhang", "Wei Chen", "Zhi-Ming Ma", "Tie-Yan Liu"], "pdf": "/pdf/ee4868fd166250dd1a6f17ca4d7e213ba7c25bae.pdf", "TL;DR": "We prove adversarial training within linear classifier can rapidly converge to a robust solution. In addition, adversarial training is stable to outliers in dataset. ", "abstract": "It has widely shown that adversarial training (Madry et al., 2018) is effective in defending adversarial attack empirically. However, the theoretical understanding of the difference between the solution of adversarial training and that of standard training is limited. In this paper, we characterize the solution of adversarial training for linear classification problem for a full range of adversarial radius \". Specifically, we show that if the data themselves are \u201d-strongly linearly-separable\u201d, adversarial\ntraining with radius smaller than \" converges to the hard margin solution of SVM with a faster rate than standard training. If the data themselves are not \u201d-strongly linearly-separable\u201d, we show that adversarial training with radius \" is stable to outliers while standard training is not. Moreover, we prove that the classifier returned by adversarial training with a large radius \" has low confidence in each data point. Experiments corroborate our theoretical finding well.", "keywords": ["adversarial training", "robustness", "separable data"], "paperhash": "yi|the_effect_of_adversarial_training_a_theoretical_characterization", "original_pdf": "/attachment/ee4868fd166250dd1a6f17ca4d7e213ba7c25bae.pdf", "_bibtex": "@misc{\nyi2020the,\ntitle={{\\{}THE{\\}} {\\{}EFFECT{\\}} {\\{}OF{\\}} {\\{}ADVERSARIAL{\\}} {\\{}TRAINING{\\}}: A {\\{}THEORETICAL{\\}} {\\{}CHARACTERIZATION{\\}}},\nauthor={Mingyang Yi and Huishuai Zhang and Wei Chen and Zhi-Ming Ma and Tie-Yan Liu},\nyear={2020},\nurl={https://openreview.net/forum?id=B1eXvyHKwS}\n}"}, "submission_cdate": 1569439579316, "submission_tcdate": 1569439579316, "submission_tmdate": 1577168230155, "submission_ddate": null, "review_id": ["r1gP-GPSqr", "Bkeuqk7NtS", "BygvfpMCFS"], "review_url": ["https://openreview.net/forum?id=B1eXvyHKwS&noteId=r1gP-GPSqr", "https://openreview.net/forum?id=B1eXvyHKwS&noteId=Bkeuqk7NtS", "https://openreview.net/forum?id=B1eXvyHKwS&noteId=BygvfpMCFS"], "review_cdate": [1572332031213, 1571200911563, 1571855630865], "review_tcdate": [1572332031213, 1571200911563, 1571855630865], "review_tmdate": [1574514721834, 1574222078104, 1572972426904], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2020/Conference/Paper1761/AnonReviewer1"], ["ICLR.cc/2020/Conference/Paper1761/AnonReviewer3"], ["ICLR.cc/2020/Conference/Paper1761/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1eXvyHKwS", "B1eXvyHKwS", "B1eXvyHKwS"], "review_content": [{"experience_assessment": "I have read many papers in this area.", "rating": "1: Reject", "review_assessment:_checking_correctness_of_experiments": "I did not assess the experiments.", "review_assessment:_thoroughness_in_paper_reading": "I made a quick assessment of this paper.", "title": "Official Blind 
Review #1", "review": "The aim of this paper is to provide a theoretical analysis of adversarial training under the linear classification setting. The main result states that, under many technical assumptions, adversarial training using gradient descent may converge to the hard margin SVM classifier with a fast rate. Here \"fast\" is not the standard 1/T fast rates but, rather, a rate of o(1/log T) (in comparison to recent results that looked into the convergence of gradient descent with logistic loss to the hard-margin SVM solution).\n\nOverall, the paper is not recommended for publication for many reasons. \n\nFirst, the notation used is sometimes imprecise and in some cases it is entirely wrong. For example, the authors decided to use x_i to replace the product y _i x_i in order to \"simplify\" notation. This makes things really hard to follow. The authors need to use a different symbol, such as z to stand for the product yx. Second, some equations do not appear to be correctly typed (e.g. in Page 1, standard learning should have xi not x). Third, the authors use epsilon to denote two different things (one for the definition of linearly separable data and one for the robustness radius), and so on. \n\nSecond, the paper needs to be proof-read. It has a lot of typos and grammatical errors that make sentences difficult to understand. Examples include: \n- \"while there several outliers are not or even not linearly separable\", \n- \"a simple generalization error bound informs the high loss on test set\"\n\nThird, some of the mathematical results do not make sense. For example, Definition 1 has two conditions, the first one is immediately satisfied once the second condition is satisfied, so why both? Also, in Proposition 1, the authors should mention that both sets are subsets of a linearly separable superset. In that case, the conclusion of Proposition 1 is obvious. Definitely, if data are linearly separable, then the hard margin will shrink as more examples are added (it cannot increase by definition of \"maximum\" margin). Moreover, in Example 1, the authors conclude with an inequality of norms and I don't see how this follows from the description of the example. The example is generic and there is nothing that indicates one norm would be larger than the other. \n\nForth, the authors make many assumptions about the loss without mentioning one example that satisfies them. In fact, the loss function they used later in the experiments violates Assumption 2.\n\nSome additional comments: \n- How did the authors arrive at Eq 4? I don't see how this follows from the assumptions. Can you please elaborate? \n- I am not aware of any book written by Vapnik in 1995 called \"Convex Optimization\". I think the authors meant the SVM paper. \n\nGiven all of these issues and the fact that the main result is incremental and holds under a very limited setting (homogenous linear classifiers under a strong separability assumption) and relies on very strong assumptions about the loss function that may not be achievable to begin with, I do not recommend acceptance. \n\n=========== \n#Post Rebuttal Remarks\n\nRegarding the rate of convergence, just to be clear, I was only clarifying what the term \"fast\" meant in the paper, not complaining. So, this did not affect my score. The notation issues need to be fixed and the authors need to ensure that mathematical equations are written precisely, as stated in my review. \n\nThanks for clarifying the issue about the loss functions. 
I agree that the example they mentioned satisfies it but it is not a common loss used in practice.", "review_assessment:_checking_correctness_of_derivations_and_theory": "I assessed the sensibility of the derivations and theory."}, {"experience_assessment": "I do not know much about this area.", "rating": "1: Reject", "review_assessment:_checking_correctness_of_experiments": "I did not assess the experiments.", "review_assessment:_thoroughness_in_paper_reading": "I made a quick assessment of this paper.", "title": "Official Blind Review #3", "review": "TL;DR: The paper gives interesting theoretical results on adversarial training. The paper only uses linear classifiers, which are hardly the same problem as deep networks where adversarial attacks are problematic. Some conclusions from theorems can be vague or informal, and therefore are not very convincing. I vote for rejecting this paper since it is hard to claim it informs deep learning research (the motivating reason for doing adversarial training). However, I am not familiar with theoretical analysis of adversarial attack/defense, so I am open to counter-arguments.\n\n=================\n1. What is the specific question/problem tackled by the paper?\nThe paper gives a theoretical analysis of the theoretically less-studied procedure of adversarial training, and shows properties of adversarial training in comparison to regular training, for both linearly separable and inseparable data. The paper sheds light on some empirical behavior of adversarially trained networks, namely that they are more robust to outliers and lower in performance.\n\n2. Is the approach well motivated, including being well-placed in the literature?\nI am not an expert on adversarial samples, so I am ill-equipped to judge the novelty of the paper. The research direction itself is well motivated, and in the realm of deep learning, it is posed as a first paper to theoretically analyze adversarial training.\nHowever, the authors only analyzed linear classifiers. This makes the results of the paper ill-suited for deep networks, whose non-linearity is arguably the reason why adversarial samples are such a problem. The motivation of the paper is thus greatly diminished. For linear classifiers, I do not know if there is existing work on their robustness when perturbations of samples are being trained on, but to be well-placed in the literature, the authors must either claim there is none, or cite those papers.\n\n3. Does the paper support the claims? This includes determining if results, whether theoretical or empirical, are correct and if they are scientifically rigorous.\nClaims and novelties in this paper include:\n(1) Adversarial training converges faster than regular training if samples are \u03b5-strongly linearly separable,\n(2) If samples are not \"\u03b5-strongly linearly separable\", adversarial training is robust to outliers, while regular training is not,\n(3) Confidence is low for all (training) samples if \u03b5 is large. \n\nOnly (1) seems to be sufficiently proved. I am not certain that this is a very useful result, and I am open to counter-arguments.\n(2) and (3) have steps that are vague and informal:\n\n(2) That regular training is susceptible to outliers is proved by example and people already know that. Also the claim relies on the assumption that pi^1 and pj^2 are on the same scale, while those samples that violate the decision boundary can have arbitrarily large pi^1 or pj^2. 
Since outliers are often the violators, and inliers often are not, a small number of carefully placed outliers can make ||p2|| quite large while each pi^1 can be very small. It is also worth noting the logic seems to boil down to \"N1>>N2, so inliers should overwhelm outliers, making the training robust\". The claim is not guaranteed.\n\n(3) I am not very sure if I understand it correctly, but the logic seems to be that the logits are bounded, so they cannot be too large, and so the confidence is low. However, the bound also involves the magnitude of w and x from the other class, so the final step of the proof is either unclear or the bound can indeed be quite large. Note that |wx| does not need to be very large for the confidence to be high (e.g. if logit is 5, the confidence is 1/(1+e^-5)=99.3%). The claim also relies on the assumption that epsilon is larger than the distance between the farthest points in the dataset, which is extreme since you can find an adversarial sample that can be considered to be simultaneously \"close enough\" to the two most dissimilar samples in the dataset.\n\nIn the end, the results are not very convincing or useful for informing deep learning research.\n\n=============\nTo improve the paper:\n- Clarify motivation and how this would inform adversarial training of highly non-linear classifiers;\n- Add related work for robustness to perturbation of linear models, or state that they don't exist;\n- Clarify weaknesses in the claims.\n\nEditorial changes:\nDefinition 2: \"logit\" <--- people call this probability estimates; logits are wTx\nSec. 4.1.2 needs to clarify what k in x_i^k means -- it's continued from proposition 1 which at first glance is irrelevant\n\n\n=================\nPost rebuttal\n\nApologies for not interacting earlier due to deadlines. \nThe rebuttal does not address my major concern (motivation), nor does it discuss its relationship with related work. The math questions are not answered very clearly; I do not see how section 4.1.2 proves p_i^1 has the same scale as p_j^2, except maybe that they are all smaller than some constant for certain samples. And other discussions in the rebuttal simply confirm my concern. \nIn summary, I think this paper is not yet ready for publication.\n\n", "review_assessment:_checking_correctness_of_derivations_and_theory": "I assessed the sensibility of the derivations and theory."}, {"experience_assessment": "I have published one or two papers in this area.", "rating": "1: Reject", "review_assessment:_thoroughness_in_paper_reading": "I read the paper at least twice and used my best judgement in assessing the paper.", "review_assessment:_checking_correctness_of_experiments": "I assessed the sensibility of the experiments.", "title": "Official Blind Review #2", "review_assessment:_checking_correctness_of_derivations_and_theory": "I did not assess the derivations or theory.", "review": "This paper provides some analyses of the difference between adversarial training and standard training for the linear classification problem. In particular, it proves that when the data is \\eps linearly separable, adversarial training converges faster than standard training. It also argues that when the data is not \\eps linearly separable, adversarial training is more robust to outliers. Simulations are constructed to verify the arguments in the paper but there is no experiments on real dataset. \n\nThe first result of this paper is interesting, that adversarial training converges faster than standard training. 
Studying the difference between the convergent points of adversarial training and standard training is also an interesting research problem. However, I still have two main concerns about the current version of the paper.\n1. The paper is trying to develop rigorous results, but its writing is arguably not rigorous. Many statements are not clear and some notations are used without definition. Section 4.1 has many vague statements. See more concrete comments below. \n2. I am not sure about the significance of the results in the paper. The results highly depend on the linear setting with convex losses. More than that, Theorem 1 assumes the \\eps strongly linear separable, and Theorem 2 assumes a large \\eps (if the statement is that |w* x_{k,i}| is less than a large number, it seems much less interesting). These are very strong assumptions that are usually not true in practice. Experimental results only cover carefully designed simulations as well.\n\nDetailed comments for item 1 above:\n1. Assumption 3, what is the quantifier for w? Is it for every w? There exists some w? How do you guarantee by \u201crescale the norm of w\u201d (from the footnote) to make sure that c_1 is not -\\infty?\n2. Lemma 1, this is for every x_i, or some particular x_i?\n3. What is w(t)?\n4. What is the condition on \\eta in Theorem 1? Why it is O(\\eta) in equation (11) or equation (24)?\n5. The claims in section 4.1 seem to be depended on carefully designed examples. Would it still true rigorously for general cases? \n6. In section 4.1.2 first paragraph, why ||w_t|| can not go to infinity? in the third paragraph, how assumption 2 implies p^k_i / p^k`_j = o(1), or later p^k_i / p^k`_j = O(1)?\n7. In section 4.2 second paragraph, what is \u201ck_th category\u201d?\n8. Is w^* unique in Theorem 2?\n"}], "comment_id": ["B1lw6TI3iS", "rkgmGXzusS", "rkxcy7zdoS", "Syevazz_or"], "comment_cdate": [1573838270752, 1573557003163, 1573556961778, 1573556927261], "comment_tcdate": [1573838270752, 1573557003163, 1573556961778, 1573556927261], "comment_tmdate": [1573838270752, 1573557014017, 1573556961778, 1573556927261], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2020/Conference/Paper1761/AnonReviewer2", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper1761/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper1761/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper1761/Authors", "ICLR.cc/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "After Rebuttal", "comment": "I don't think the response addresses my questions, nor is the paper ready for publication. \n\nJust to list a few here. In the response Lemma 1 is used for multiple questions. Note that in Lemma 1 the dataset is assumed to be \u201c\\eps strongly linear separable\u201d. Why can it still be used in section 4? Since w* may not be unique, how do I guarantee the convergence of the algorithm? I don't think clipping can be seen as rescaling. 
Also, will the claims in the paper still hold given the clipping?\n\n"}, {"title": "Thanks for your review, the following are our responses to your questions", "comment": "\u201cFor linear classifiers, I do not know if there is existing work on their robustness when perturbations of samples are being trained on, but to be well-placed in the literature, the authors must either claim there is none, or cite those papers\u201d\nTo the best of our knowledge, a theoretical analysis of the limit point of adversarial training only appeared in Ilyas et al., 2019; we discuss the difference from our work at the end of section 4.1.2. The robustness of the limit point is a hard problem, even under the linear framework and standard training. Some recent works have studied it since 2017, e.g., Soudry et al. (2017), Ji & Telgarsky (2018), but this is the first paper to discuss it for adversarial training. \n\n\u201cThat regular training is susceptible to outliers is proved by example and people already know that\u2026\u201d\nThat standard training is susceptible to outliers is proved in proposition 1 and illustrated by example 1 and example 2. The stability of adversarial training is not built on the assumption \u201cp_{i}^{1} has the same scale as p_{j}^{2}\u201d. On the contrary, we show that p_{i}^{1} has the same scale as p_{j}^{2} in the first paragraph of section 4.1.2. This is because the limit point of adversarial training w^{*} has a finite norm; otherwise it would end up with an infinite loss due to the non-separability of the data.\n\n\u201cI am not very sure if I understand it correctly, but the logic seems to be that the logits are bounded\u201d\nEquation (16) can be viewed as saying that the confidence e^{|w^{T}x_{k,i}|} is upper bounded by the average loss over the opposite class (smaller than some constant). Our result is that e^{|w^{T}x_{k,i}|} can be upper bounded, rather than |w^{T}x_{k,i}|.\nThe assumption of a large \\epsilon is a technical assumption used to give a quantitative description of the confidence. A relatively small \\epsilon can also correspond to low confidence in practice. \n\nDefinition 2: \"logit\" <--- people call this probability estimates; logits are wTx Sec. 4.1.2 needs to clarify what k in x_i^k means -- it's continued from proposition 1 which at first glance is irrelevant.\n\tWe will revise the notation accordingly to make the paper easier to read in the next version. \n"}, {"title": "Thanks for your review, the following are our responses to your questions", "comment": "\u201cSimulations are constructed to verify the arguments in the paper but there is no experiments on real dataset.\u201d\n\tA series of experiments on CIFAR10 related to our results is deferred to appendix E.2.\n\u201cTheorem 1 assumes the \\eps strongly linear separable, and Theorem 2 assumes a large \\eps\u201d\nThe assumption \u201c\\eps-strongly linearly separable\u201d is equivalent to linear separability, which implies there must exist an \\epsilon>0 such that w^{*T}x_{i} \\geq \\epsilon \\|w^{*}\\|. We can adjust the adversarial radius accordingly, so it is a mild assumption. The assumption of a large \\epsilon in theorem 2 is used to give a quantitative characterization of the confidence of the classifier obtained by adversarial training. A slightly smaller \\epsilon also corresponds to similar observations in practice. We will try to get rid of this technical assumption in the next version. 
\n\n\u201cHow do you guarantee by \u201crescale the norm of w\u201d (from the footnote) to make sure that c_1 is not -\\infty\u201d\nWe should highlight that this assumption only appears in theorem 1. The condition is used to ensure the L-smoothness of the adversarial loss l(w). For every classifier w, we can rescale w as w/\\|w\\| \\max{\\|w\\|, c} for some small constant c. Then \\|w\\| cannot equal zero. Besides that, we can similarly clip the norm of w if w^{T}x_i - \\epsilon\\|w\\| is smaller than some negative constant. Then we can ensure c_1 > -\\infty. As a matter of fact, we can omit the rescaling procedure if the loss function l(u) is \\log{1+e^{-u}} or the initial learning rate is smaller than 1/L(w(0)), where L(w(0)) is the local Lipschitz constant at the point w(0). \n\n\u201cLemma 1, this is for every x_i, or some particular x_i\u201d\nThe third equation in lemma 1 is for every x_i. It shows that w_t will converge to a point with zero adversarial training loss according to our assumption on the loss function l(u).\n\n\u201cWhat is w(t). What is the condition on \\eta in Theorem 1? Why it is O(\\eta) in equation (11) or equation (24)\u201d\nw(t) is defined in equation (6); it denotes the gradient flow iterates of adversarial training. The gradient flow iterates approximate the real iterates w_t obtained by gradient descent here. In theorem 1, we characterize w(t) to reveal the behavior of the real gradient descent iterates w_t. The error bound is given by the learning rate \\eta. Hence the O(\\eta) terms in equations (11) and (24) are the estimation error between the gradient flow iterates w(t) and the gradient descent iterates w_t.\n\n\u201cThe claims in section 4.1 seem to be depended on carefully designed examples. Would it still true rigorously for general cases?\u201d\nThe two examples are used to illustrate the instability of standard training; a general description is presented in proposition 1. The stability of adversarial training is shown in equation (15).\n\n\u201cIn section 4.1.2 first paragraph, why \\|w_t\\| can not go to infinity? in the third paragraph, how assumption 2 implies p^k_i / p^k_j = o(1), or later p^k_i / p^k_j = O(1)?\u201d\nAssumption 2 implies that \\lim_{u\\to\\infty} l(u) = 0. Since we assume the data are not separable, there exists an x_i such that w^{T}x_i - \\epsilon \\|w\\|\\leq 0 for each w. The iterates w_t will converge to a minimum according to lemma 1, while \\|w_t\\| going to infinity would render w^{T}(t)x_i - \\epsilon \\|w(t)\\| going to -\\infty. Then we would end up with an infinite loss. A more detailed description is given in paragraph 1 of section 4.1.2. \nWithout loss of generality, we use l(u)=e^{-u} as an illustration. p^{k}_{i} = e^{-w^{*T}x_{i}^{k} + \\epsilon\\|w^{*}\\|}. Then p^{k}_{i} / p^{k\u2019}_{j} = \\exp{-w^{*T}(x_{i}^{k} \u2013 x_{j}^{k})}. The minimum w^{*} has infinite norm, which makes the loss go to zero. Then p^{k}_{i} / p^{k\u2019}_{j} = o(1), since x_{i}^{k} is a non-support vector and x_{j}^{k\u2019} is a support vector (w^{*T}(x_{i}^{k} \u2013 x_{j}^{k})<0). 
But for adversarial training, \\|w^{*}\\| is not infinite, so p_{i}^{k} will have the same scale for each i and k.\n\n\u201cIn section 4.2 second paragraph, what is \u201ck_th category\u201d?\u201d\n\tIt represents whether the data point is from the first or the second class.\n\n\u201cIs w^* unique in Theorem 2?\u201d\n\tw^{*} is the hard margin solution; it can be non-unique.\n"}, {"title": "Thanks for your review, the following are our responses to your questions", "comment": "\u201cHere \"fast\" is not the standard 1/T fast rates but, rather, a rate of o(1/log T)\u201d\nWe try to reveal the actual convergence rate of adversarial training to the robust solution rather than develop a faster algorithm to obtain a robust solution. Hence, we think the result is valuable, even though the improvement does not reach O(1/T).\n\u201cthe authors decided to use x_i to replace the product y_i x_i in order to \"simplify\" notation.\u201d\nWe would like to clarify the notation here and revise it accordingly in the paper. First, representing y_i x_i by x_i is based on the consideration that x_i and y_i always appear as x_i y_i, and w^{T}x_i y_i > 0 means the data point is correctly classified. The symbol will not mislead the reader. We will substitute the \\epsilon in definition 2 with some other symbol. \n\u201cThird, some of the mathematical results do not make sense\u201d\nThe two conditions in definition 1 are used to emphasize that w^{*} not only makes a correct classification for each data point but also ensures each data point is at distance larger than \\epsilon from the margin. We will get rid of the first condition in the next version. The condition that the union of the data is linearly separable is necessary to make the proposition meaningful. We will add it. Example 1 is used to give an intuitive explanation of proposition 1. It can be summarized as: the hard margin solution can be sensitive to outliers close to each other, but standard training converges to the hard margin solution, which implies the instability of standard training. Equation (12) is a direct deduction revealing that the x_{i}^{1} will lie close to the new hard margin solution (\\hat{w}_{1}, \\hat{w}_{2}) after some outliers x_{i}^{2} are added, while they lie away from their original hard margin solution (\\hat{w}_{1}, 0). \n\u201cFourth, the authors make many assumptions about the loss without mentioning one example that satisfies them\u201d\nWe enumerate some loss functions that satisfy our assumptions in the footnote of page 3. For example, choosing the loss function l(u) as e^{-u} or \\log{1+e^{-u}} satisfies our assumptions. Also please notice that assumption 2 (exponential tail) focuses on the behavior of the loss function l(u) when u is large. We chose l(u)=\\log{1+e^{-u}}, which is close to e^{-u} when u is large. The simple inequality \\log{1+x} \\geq x \u2013 x^{2}/2 gives the exact conclusion. \n\u201cHow did the authors arrive at Eq 4\u201d\nIt is a direct result of the linear classifier and the monotonically decreasing property of the loss function l(u).\n\n\u201cunder a very limited setting (homogeneous linear classifiers under a strong separability assumption)\u201d\nOur assumptions about the loss function are satisfied by commonly used loss functions, e.g., e^{-u}, \\log{1+e^{-u}}. 
Also, the assumption of \u201cstrong separability\u201d is equivalent to the data being linearly separable, because if the data are separable, then there must exist an \\epsilon>0 such that w^{*T}x_{i} \\geq \\epsilon\\|w^{*}\\|. Then the data satisfy our \u201c\\epsilon-strongly linearly separable\u201d condition. Hence, our core assumption is that the data are themselves linearly separable. Besides that, we also give a discussion of adversarial training on non-linearly separable data. \n"}], "comment_replyto": ["rkxcy7zdoS", "Bkeuqk7NtS", "BygvfpMCFS", "r1gP-GPSqr"], "comment_url": ["https://openreview.net/forum?id=B1eXvyHKwS&noteId=B1lw6TI3iS", "https://openreview.net/forum?id=B1eXvyHKwS&noteId=rkgmGXzusS", "https://openreview.net/forum?id=B1eXvyHKwS&noteId=rkxcy7zdoS", "https://openreview.net/forum?id=B1eXvyHKwS&noteId=Syevazz_or"], "meta_review_cdate": 1576798731844, "meta_review_tcdate": 1576798731844, "meta_review_tmdate": 1576800904595, "meta_review_ddate ": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "This paper studies adversarial training in the linear classification setting, and shows a rate of convergence for adversarial training of o(1/log T) to the hard margin SVM solution under a set of assumptions. \n\nWhile 2 reviewers agree that the problem and the central result are somewhat interesting (though R3 is uncertain of the applicability to deep learning, I agree that useful insights can often be gleaned from studying the linear case), reviewers were critical of the degree of clarity and rigour in the writing, including notation, symbol reuse, repetitions/redundancies, and clarity surrounding the assumptions made.\n\nNo updates to the paper were made and reviewers did not feel their concerns were addressed by the rebuttals. I therefore recommend rejection, but would encourage the authors to continue refining their paper in order to showcase their results more clearly and didactically.", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1eXvyHKwS&noteId=N9d8kA54k"], "decision": "Reject"}
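A note on the term w^T x_i - epsilon*||w|| that recurs in the reviews and rebuttals above (e.g., the discussion of Eq. (4)): for a linear classifier and an l2 perturbation ball of radius epsilon, the inner maximization of adversarial training has a closed form, because the worst-case perturbation of a point (x, y) is d = -epsilon * y * w / ||w||, so the adversarial loss is l(y w^T x - epsilon*||w||) for any monotonically decreasing loss l. The snippet below is a minimal numerical sketch of that fact only; the logistic loss, the random data, and the variable names are illustrative assumptions, not the authors' code or the paper's experiments.

# Illustrative sketch: closed-form inner maximization of adversarial training
# for a linear classifier with an l2 perturbation ball (assumed setup, not the
# authors' implementation).
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)      # linear classifier
x = rng.normal(size=5)      # one data point
y = 1.0                     # its label in {-1, +1}
eps = 0.3                   # adversarial radius

def loss(u):
    # logistic loss, monotonically decreasing in its argument
    return np.log1p(np.exp(-u))

# Worst-case perturbation and the resulting closed-form adversarial loss.
d_star = -eps * y * w / np.linalg.norm(w)
closed_form = loss(y * (w @ x) - eps * np.linalg.norm(w))

# Sanity check: no perturbation of norm <= eps gives a larger loss.
for _ in range(10000):
    d = rng.normal(size=5)
    d = eps * d / np.linalg.norm(d)          # point on the sphere of radius eps
    assert loss(y * (w @ (x + d))) <= closed_form + 1e-12

assert np.isclose(loss(y * (w @ (x + d_star))), closed_form)
print("worst-case adversarial loss:", closed_form)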