AMSR / conferences_raw / neuroai19 / neuroai19_ByxfNXF8Ir.json
{"forum": "ByxfNXF8Ir", "submission_url": "https://openreview.net/forum?id=ByxfNXF8Ir", "submission_content": {"TL;DR": "Perturbations can be used to learn feedback weights on large fully connected and convolutional networks.", "keywords": ["biologically plausible deep learning", "feedback alignment", "REINFORCE", "node perturbation"], "pdf": "/pdf/ea8503a9157a8fc388477eb4bdd4584133abcad6.pdf", "authors": ["Benjamin James Lansdell", "Prashanth Prakash", "Konrad Paul Kording"], "title": "Learning to solve the credit assignment problem", "abstract": "Backpropagation is driving today's artificial neural networks. However, despite extensive research, it remains unclear if the brain implements this algorithm. Among neuroscientists, reinforcement learning (RL) algorithms are often seen as a realistic alternative. However, the convergence rate of such learning scales poorly with the number of involved neurons. Here we propose a hybrid learning approach, in which each neuron uses an RL-type strategy to learn how to approximate the gradients that backpropagation would provide. We show that our approach learns to approximate the gradient, and can match the performance of gradient-based learning on fully connected and convolutional networks. Learning feedback weights provides a biologically plausible mechanism of achieving good performance, without the need for precise, pre-specified learning rules.", "authorids": ["ben.lansdell@gmail.com", "prprak@seas.upenn.edu", "koerding@gmail.com"], "paperhash": "lansdell|learning_to_solve_the_credit_assignment_problem"}, "submission_cdate": 1568211753907, "submission_tcdate": 1568211753907, "submission_tmdate": 1572280606518, "submission_ddate": null, "review_id": ["HJxmx5K8wr", "H1gOVWEsDB", "rJgjhUsiPB"], "review_url": ["https://openreview.net/forum?id=ByxfNXF8Ir&noteId=HJxmx5K8wr", "https://openreview.net/forum?id=ByxfNXF8Ir&noteId=H1gOVWEsDB", "https://openreview.net/forum?id=ByxfNXF8Ir&noteId=rJgjhUsiPB"], "review_cdate": [1569262058911, 1569567024464, 1569597107479], "review_tcdate": [1569262058911, 1569567024464, 1569597107479], "review_tmdate": [1570047562737, 1570047535188, 1570047532868], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper32/AnonReviewer2"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper32/AnonReviewer3"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper32/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["ByxfNXF8Ir", "ByxfNXF8Ir", "ByxfNXF8Ir"], "review_content": [{"evaluation": "4: Very good", "intersection": "4: High", "importance_comment": "The current work presents an algorithm for neural network training using node perturbation that does not rely on weight transport and performs well on a number of difficult machine learning problems. These methods are essential for neuroscience and AI and will hopefully make solid testable predictions in the near future.", "clarity": "5: Impeccable", "technical_rigor": "4: Very convincing", "intersection_comment": "How real and artificial neural networks can learn without direct access to synaptic weight information from other neurons (\u201cweight transport\u201d) is an essential question in both neuroscience and AI.", "rigor_comment": "The results of node perturbation for MNIST, auto-encoding MNIST, and CIFAR are convincing. Authors show the average of multiple runs and over different noise levels. 
Where the method has drawbacks (noise requirements, baseline loss, separate feedforward and feedback learning), the authors have clearly pointed to ways these requirements are in line with biology, or could be removed in future work.", "comment": "The similarity between this work and Akrout et al. (2019) is definitely large. Would be curious to hear the author\u2019s thoughts on the potential advantages / disadvantages of their method in comparison.", "importance": "4: Very important", "title": "A step towards credit assignment without weight transport", "category": "Common question to both AI & Neuro", "clarity_comment": "The method and benchmarks being performed are described clearly and with reference to the relevant literature."}, {"title": "review", "importance": "2: Marginally important", "importance_comment": "Understanding how learning occurs in the brain is extremely important. Understanding how the brain could implement backprop is also important. This approach seems technically correct, but inefficient with potential scaling issues -- it seems unlikely it will change how readers think about learning in the brain.", "rigor_comment": "I believe all claims are correct.\n", "clarity_comment": "This was clearly written, but seemed unnecessarily complex.\n", "clarity": "3: Average readability", "evaluation": "3: Good", "intersection_comment": "This work focuses on porting the idea of backprop from AI to neuro.\n", "intersection": "4: High", "comment": "I don't understand the need for the noise perturbations. This work proposes updating the backwards weights with (B^T e - lambda) e^T, and states that doing so will cause them to converge towards the transpose of the forward weights. Wouldn't it be simpler, and require a less complex circuit, simply to update the backwards weights with h^{i-1} e^T? (as is proposed in [Kolen and Pollack, 1994]). In this case, foward and reverse weights will also converge towards each other. It seems like doing this by injecting noise instead of just using the forward activations requires both a more complex, and noisier, circuit.\n\nAlso, if every unit is simultaneously injecting noise, it's not obvious to me that this will scale better with number of units than RL -- I suspect the scaling will be exactly the same, since noise contributions from different units will interfere with each other.\n\n(should cite evolutionary strategies for your functional form for lambda)\n", "technical_rigor": "4: Very convincing", "category": "AI->Neuro"}, {"title": "Rich experimental analysis supports an improvement over relying on feedback alignment", "importance": "4: Very important", "importance_comment": "Learning effective backward-pass weights is an important step towards biologically plausible learning of difficult tasks in ML.", "rigor_comment": "The experiments and visualizations are rich and convincing. A learning-based approach to credit assignment seems to be clearly better than relying on feedback alignment. I agree with reviewer 3 that a discussion of training signal variance scaling would be helpful, and I agree with reviewer 2 that comparisons to more related approaches would be interesting.", "clarity_comment": "I understood the methods section up until \"we will use the noisy response to estimate gradients\", which is why I don't have a good sense of how this approach will scale (see reviewer 3's comment about simultaneous noise injection). 
Other than this, the paper is interesting, well written, and well organized.", "clarity": "2: Can get the general idea", "evaluation": "4: Very good", "intersection_comment": "Learning without weight transport is of interest to members of both communities.", "intersection": "5: Outstanding", "technical_rigor": "3: Convincing", "category": "Common question to both AI & Neuro"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate ": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_confidence": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": "Accept (Poster)"}
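
The two feedback-weight updates debated in reviewer 3's comment can be made concrete with a minimal NumPy sketch. This is not the paper's code: the toy linear network, the shapes, and names such as Bt, sigma, and eta are illustrative assumptions, with Bt playing the role of B^T in the reviewer's notation.

import numpy as np

# Toy linear network: input -> hidden (W) -> output (V), with learned
# feedback weights Bt that carry the output error back to the hidden layer.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 5, 8, 3
W = rng.normal(scale=0.1, size=(n_hid, n_in))    # forward, input -> hidden
V = rng.normal(scale=0.1, size=(n_out, n_hid))   # forward, hidden -> output
Bt = rng.normal(scale=0.1, size=(n_hid, n_out))  # feedback (role of B^T)
sigma, eta = 0.01, 0.05                          # perturbation scale, learning rate

x = rng.normal(size=n_in)
target = rng.normal(size=n_out)

# Clean and noise-perturbed forward passes (linear units for brevity).
h = W @ x
e = V @ h - target                               # output error, clean pass
xi = rng.normal(scale=sigma, size=n_hid)         # per-neuron noise injection
e_noisy = V @ (h + xi) - target

loss = 0.5 * e @ e
loss_noisy = 0.5 * e_noisy @ e_noisy

# REINFORCE / node-perturbation estimate of dL/dh: correlate the change in
# loss with the injected noise (the "lambda" of the review's notation).
lam = (loss_noisy - loss) * xi / sigma**2

# (1) Update as written in the review: nudge the feedback signal B^T e
# toward the perturbation estimate, i.e. Delta(B^T) = -eta (B^T e - lambda) e^T.
Bt_perturb = Bt - eta * np.outer(Bt @ e - lam, e)

# (2) Kolen & Pollack (1994) alternative raised by the reviewer: reuse the
# presynaptic activity, Delta(B^T) = -eta h e^T, mirroring the forward
# update Delta(V) = -eta e h^T.
Bt_kp = Bt - eta * np.outer(h, e)

In expectation, lam approaches dL/dh as sigma shrinks, which is why rule (1) can align Bt with V^T without weight transport; rule (2) instead mirrors the forward update so that, with weight decay, V and Bt converge toward transposes of each other, at the cost of the activity-reuse circuit the reviewer describes.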