{"forum": "HkgSXQtIIB", "submission_url": "https://openreview.net/forum?id=HkgSXQtIIB", "submission_content": {"title": "On the Adversarial Robustness of Neural Networks without Weight Transport", "Diversity": [], "gender": [], "keywords": ["Neural networks without weight transport", "gradient-based adversarial attacks"], "TL;DR": "Less biologically implausible deep neural networks trained without weight transport can be harder to fool.", "abstract": "Neural networks trained with backpropagation, the standard algorithm of deep learning which uses weight transport, are easily fooled by existing gradient-based adversarial attacks. This class of attacks are based on certain small perturbations of the inputs to make networks misclassify them. We show that less biologically implausible deep neural networks trained with feedback alignment, which do not use weight transport, can be harder to fool, providing actual robustness. Tested on MNIST, deep neural networks trained without weight transport (1) have an adversarial accuracy of 98% compared to 0.03% for neural networks trained with backpropagation and (2) generate non-transferable adversarial examples. However, this gap decreases on CIFAR-10 but is still significant particularly for small perturbation magnitude less than 1 \u2044 2.", "seniority": "Industry", "authorids": ["makrout@cs.toronto.edu"], "authors": ["Mohamed Akrout"], "pdf": "/pdf/d52ad80b6b4c1d540986b94510c77163134d5235.pdf", "paperhash": "akrout|on_the_adversarial_robustness_of_neural_networks_without_weight_transport"}, "submission_cdate": 1568211741023, "submission_tcdate": 1568211741023, "submission_tmdate": 1570116647761, "submission_ddate": null, "review_id": ["BJekVQW5vS", "ByejhxC9vH", "Hye8TyhovH"], "review_url": ["https://openreview.net/forum?id=HkgSXQtIIB¬eId=BJekVQW5vS", "https://openreview.net/forum?id=HkgSXQtIIB¬eId=ByejhxC9vH", "https://openreview.net/forum?id=HkgSXQtIIB¬eId=Hye8TyhovH"], "review_cdate": [1569489702726, 1569542322983, 1569599421635], "review_tcdate": [1569489702726, 1569542322983, 1569599421635], "review_tmdate": [1570047549328, 1570047540161, 1570047531983], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper1/AnonReviewer3"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper1/AnonReviewer2"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper1/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["HkgSXQtIIB", "HkgSXQtIIB", "HkgSXQtIIB"], "review_content": [{"title": "Sound but premise is strange/unexplained", "importance": "2: Marginally important", "importance_comment": "Premise is that feedback alignment networks are also more robust to adversarial attacks. The authors show because the \"gradient\" in the feedback pathway is a rough approximation, it is hard to use this gradient to train an adversarial attack.\n\nThe basic premise is very strange. Adversarial attacks are artificial: attacker has access to gradient of the loss function. For FA networks, it's unclear why an attacker could not access true gradient, and be forced to use the approximate gradient.", "rigor_comment": "Overall the technical aspects of this paper seem sound. ", "clarity_comment": "No trouble understanding the material or writing", "clarity": "4: Well-written", "evaluation": "3: Good", "intersection_comment": "By focusing on the more biologically plausible \"feedback alignment\" networks, the paper does sit at the intersection of neuro and AI. 
"intersection": "3: Medium", "comment": "The premise of the work must be clarified, as well as whether or how adversarial attacks (as framed) might have relevance to neuroscience.", "technical_rigor": "3: Convincing", "category": "Not applicable"}, {"title": "Interesting idea -- but needs more robust testing", "importance": "3: Important", "importance_comment": "This work might open up a new class of neural network learning frameworks that could go beyond simply addressing adversarial attacks.", "rigor_comment": "It is hard to judge the rigor with so little information. Overall, it seems pretty well managed.", "clarity_comment": "The document is well written.", "clarity": "4: Well-written", "evaluation": "4: Very good", "intersection_comment": "The work is inspired by a critical difference in feedback connectivity between the brain and these models. The author is putting forward a very interesting proposition, and it is worth discussing further.", "intersection": "4: High", "technical_rigor": "3: Convincing", "category": "Neuro->AI"}, {"category": "Neuro->AI", "title": "Biologically-plausible learning without weight transport provides robustness to attacks", "importance": "4: Very important", "comment": "Overall comments: Well-written paper that explores an interesting idea. The material presented is novel and relevant to the workshop. The experiments conducted do a good job of supporting the author's claims.\n\nSeveral small typos:\nLine 7 \u2013 \u201cbut is still\u201d instead of \u201cbut still\u201d and \u201csmall perturbations of magnitude\u201d instead of \u201csmall perturbation magnitude\u201d\nLine 34 \u2013 Interchange the order of \u201cfa\u201d and \u201cbp\u201d\nLine 52 \u2013 Kurakin et al.\nLine 53 \u2013 Replace \u201cchange\u201d with \u201cchanges\u201d\nReplace BMI with BIM wherever appropriate.", "evaluation": "4: Very good", "intersection": "4: High", "rigor_comment": "Results appear to be sound.", "clarity": "4: Well-written", "intersection_comment": "neural-inspired learning", "technical_rigor": "4: Very convincing", "clarity_comment": "Well written.", "importance_comment": "New strategies for learning that use more biologically plausible learning rules are of extreme importance for the field."}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_confidence": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": "Accept (Poster)"}
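
To make the contrast the reviews debate concrete, here is a minimal sketch, not the author's code, of the two training schemes and the two attack surfaces. A toy two-layer NumPy network is trained either with backpropagation or with feedback alignment (a fixed random matrix B2 replaces the transposed forward weights in the backward pass, avoiding weight transport), and an FGSM-style one-step attack is then computed either from the true input gradient or through the approximate feedback pathway. The architecture, synthetic data, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (a stand-in for MNIST/CIFAR-10 in the paper).
X = rng.normal(size=(512, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, use_fa, steps=2000, lr=0.5, seed=1):
    """Train a 20-32-1 net; use_fa=True sends errors backward through a fixed
    random matrix B2 (feedback alignment) instead of the transposed weights."""
    r = np.random.default_rng(seed)
    W1 = r.normal(scale=0.1, size=(20, 32))
    W2 = r.normal(scale=0.1, size=(32, 1))
    B2 = r.normal(scale=0.1, size=(32, 1))  # fixed random feedback weights
    for _ in range(steps):
        h = np.tanh(X @ W1)
        p = sigmoid(h @ W2)
        dp = (p - y) / len(X)              # grad of cross-entropy w.r.t. logit
        fb = B2 if use_fa else W2          # FA avoids weight transport here
        dh = (dp @ fb.T) * (1.0 - h ** 2)  # tanh derivative
        W2 -= lr * (h.T @ dp)
        W1 -= lr * (X.T @ dh)
    return W1, W2, B2

def accuracy(X, y, W1, W2):
    return float(np.mean((sigmoid(np.tanh(X @ W1) @ W2) > 0.5) == y))

def fgsm(X, y, W1, W2, fb, eps=0.25):
    """One-step sign attack: fb=W2 uses the true input gradient, while fb=B2
    mimics an attacker restricted to the approximate feedback pathway."""
    h = np.tanh(X @ W1)
    p = sigmoid(h @ W2)
    dX = ((p - y) @ fb.T * (1.0 - h ** 2)) @ W1.T
    return X + eps * np.sign(dX)

for use_fa in (False, True):
    W1, W2, B2 = train(X, y, use_fa)
    X_true = fgsm(X, y, W1, W2, fb=W2)  # white-box: true gradient
    X_appr = fgsm(X, y, W1, W2, fb=B2)  # attack routed through the FA pathway
    print("FA" if use_fa else "BP",
          "clean:", accuracy(X, y, W1, W2),
          "true-grad attack:", accuracy(X_true, y, W1, W2),
          "FA-pathway attack:", accuracy(X_appr, y, W1, W2))
```

The `fb` argument makes the first review's objection explicit: a white-box attacker can always pass `fb=W2` and use the true gradient, so any robustness specific to feedback alignment would have to survive that setting, not only attacks routed through `B2`.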