{"forum": "HklfNQFL8H", "submission_url": "https://openreview.net/forum?id=HklfNQFL8H", "submission_content": {"TL;DR": "Networks that learn with feedback connections and local plasticity rules can be optimized for using meta learning.", "keywords": ["biologically plausible learning", "meta learning"], "pdf": "/pdf/4b1395dc5879a1a37a70cc0d6c35053491065ea5.pdf", "authors": ["Jack Lindsey"], "title": "Learning to Learn with Feedback and Local Plasticity", "abstract": "Developing effective biologically plausible learning rules for deep neural networks is important for advancing connections between deep learning and neuroscience. To date, local synaptic learning rules like those employed by the brain have failed to match the performance of backpropagation in deep networks. In this work, we employ meta-learning to discover networks that learn using feedback connections and local, biologically motivated learning rules. Importantly, the feedback connections are not tied to the feedforward weights, avoiding any biologically implausible weight transport. It can be shown mathematically that this approach has sufficient expressivity to approximate any online learning algorithm. Our experiments show that the meta-trained networks effectively use feedback connections to perform online credit assignment in multi-layer architectures. Moreover, we demonstrate empirically that this model outperforms a state-of-the-art gradient-based meta-learning algorithm for continual learning on regression and classification benchmarks. This approach represents a step toward biologically plausible learning mechanisms that can not only match gradient descent-based learning, but also overcome its limitations.", "authorids": ["jackwlindsey@gmail.com"], "paperhash": "lindsey|learning_to_learn_with_feedback_and_local_plasticity"}, "submission_cdate": 1568211754311, "submission_tcdate": 1568211754311, "submission_tmdate": 1572541800298, "submission_ddate": null, "review_id": ["ryxYLq9fDr", "HygrDolYvr", "ryeGyzVtwH"], "review_url": ["https://openreview.net/forum?id=HklfNQFL8H¬eId=ryxYLq9fDr", "https://openreview.net/forum?id=HklfNQFL8H¬eId=HygrDolYvr", "https://openreview.net/forum?id=HklfNQFL8H¬eId=ryeGyzVtwH"], "review_cdate": [1569004112718, 1569422173385, 1569436122032], "review_tcdate": [1569004112718, 1569422173385, 1569436122032], "review_tmdate": [1570047566195, 1570047555921, 1570047554121], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper33/AnonReviewer3"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper33/AnonReviewer1"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper33/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["HklfNQFL8H", "HklfNQFL8H", "HklfNQFL8H"], "review_content": [{"evaluation": "4: Very good", "intersection": "5: Outstanding", "importance_comment": "This submission presents a meta-learning approach to discovering local updates guided by feedback. The goal is to move towards more biologically plausible learning mechanisms. This is an important topic for linking neuroscience and AI, and the approach the authors take here is interesting/promising.", "clarity": "4: Well-written", "technical_rigor": "3: Convincing", "intersection_comment": "Definitely at the intersection of neuroscience and AI!", "rigor_comment": "There was little in the way of technical details. Partly, that was a matter of space, but it was also partly a matter of choice (e.g. 
the description of the experiments could have been shortened to make a bit more room for math). Also, the proof only demonstrates the expressivity of the approach. One concern I would have is the question of training efficiency - is it more efficient than standard meta-learning techniques and are there ways to make it more efficient by loosening some of the constraints on feedback and plasticity rules? Regardless, it is hard to fully assess the technical rigour, but the basic concept seems sound and the experiments are reasonably convincing. The finding regarding the importance of feedback is particularly illuminating in my opinion.", "comment": "Overall, a great submission. I have many questions (e.g., what does the learned feedback look like? What do the learned update rules look like? Why force local learning rules based on old findings in neuroscience? We are beginning to realize that it isn't all about Hebbian plasticity! See e.g.: https://science.sciencemag.org/content/357/6355/1033.abstract). But, the workshop is the perfect place to ask these questions. :)", "importance": "4: Very important", "title": "Interesting paper, would be keen to learn more", "category": "Common question to both AI & Neuro", "clarity_comment": "Not perfect, but overall very well-written."}, {"evaluation": "4: Very good", "intersection": "4: High", "importance_comment": "This is nice work that addresses the credit assignment problem with a meta-learning approach. The motivation needs to be a bit clearer. Is the work trying to address the credit assignment problem in general, or just when applied to online learning tasks? Either way this is important work, with many interesting future directions.", "clarity": "4: Well-written", "technical_rigor": "3: Convincing", "intersection_comment": "There are exiting directions in both AI and neuroscience this work could be take. \n\nSeeing if these meta-learnt rules line up with previously characterized biological learning rules is particularly interesting.", "rigor_comment": "The model and implementation make sense as far as I can tell from this brief submission.\n\nThe theoretical results stated are nice to have.\n\nSection 1 pitches the method as solving the credit assignment problem, citing problems with weight symmetry etc, that apply to many forms of learning. But the related work in Section 2 then goes on to talk about the efficiency of backprop for solving online learning and few-shot learning tasks. The efficiency of backprop should be mentioned in the intro if it is something this work is aiming to address. \n\nWhile much human learning may be more naturally cast as online learning, not all of it is. There may be much interest in how we learn from so few samples in certain settings, but we also learn some relationships/tasks in a classical associationist manner which is well modeled by 'slow' gradient-descent like learning (e.g. Rescorla Wagner). The credit assignment problem exists in these cases also. So I think the present work needs to be repitched slightly as solving credit assignment in an online/few shot learning setting. Or discuss how it can be extended to more general learning problems.", "comment": "Define the model more explicitly. And emphasize that this only solves credit assignment for certain types of learning problems (at the moment).", "importance": "4: Very important", "title": "Nice work. 
Needs to be clearer about whether it's trying to solve credit assignment for general learning problems, or just online learning", "category": "AI->Neuro", "clarity_comment": "The submission is pretty clear. \n\nIn understanding the model, it would be useful to more explicitly define the model. For instance, how is the b at line 63 related to the activation x_i and ReLU at lines 75 and 76?"}, {"evaluation": "5: Excellent", "intersection": "4: High", "importance_comment": "This paper presents some interesting results on meta-learning of weights in a more biologically plausible neural network. The results are fairly important, as they suggest that a proper initialization may be a key aspect of the success of biologically plausible learning rules.", "clarity": "4: Well-written", "technical_rigor": "4: Very convincing", "intersection_comment": "The paper touches on concepts in both neuroscience and machine learning, however, the paper ultimately seems more geared toward a machine learning audience. For instance, while the authors briefly speculate about alternative ways in which meta-learning could be implemented, they do not provide an in-depth discussion on its biological plausibility.", "rigor_comment": "Overall, the authors are rigorous in their evaluation. For the final draft, the authors should include their additional analyses on the feedback weights.", "comment": "This paper presents an interesting approach to improving biologically plausible learning in deep networks. A few aspects of the paper could be clarified, e.g. the baseline methods. Diagrams would also be helpful in clarifying the feedforward vs. feedback mechanisms. Again, I would want to see the additional analyses included in the final draft. This paper would be a useful addition to the workshop.", "importance": "5: Astounding importance", "title": "Meta-learning biologically plausible networks", "category": "Common question to both AI & Neuro", "clarity_comment": "Much of the paper was clear in its description. One point of confusion is the distinction between gradient-based learning and gradient-based meta-learning. The authors claim that they compare with gradient-based meta-learning, however, their method also uses gradients to perform meta-learning. Clarifying these details/wording would help to clear up the confusion."}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate ": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_confidence": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": "Accept (Oral)"}
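
To make the setup the abstract describes concrete, here is a minimal sketch of the general scheme: an inner loop in which weights change only through local, feedback-driven plasticity, and an outer loop that meta-trains the feedback weights, plasticity rates, and initial weights by gradient descent through the inner-loop updates. All specifics below (the two-layer regression setup and the names `B`, `log_eta`, `inner_steps`) are illustrative assumptions, not the paper's actual model or notation; the one structural point taken from the abstract is that the feedback matrix `B` is a free meta-parameter rather than the transpose of the feedforward weights, so no weight transport occurs.

```python
# Hedged sketch of meta-learned local plasticity (assumed details, not the
# paper's exact model). Inner-loop updates use only locally available
# quantities: presynaptic activity and an error signal carried by the
# meta-learned feedback matrix B.
import torch

torch.manual_seed(0)
n_in, n_hid, n_out = 4, 16, 1
inner_steps, meta_steps = 5, 1000

# Meta-parameters optimized by the outer loop.
W1_init = torch.nn.Parameter(0.1 * torch.randn(n_hid, n_in))
W2_init = torch.nn.Parameter(0.1 * torch.randn(n_out, n_hid))
B = torch.nn.Parameter(0.1 * torch.randn(n_hid, n_out))  # learned feedback weights
log_eta = torch.nn.Parameter(torch.zeros(2) - 4.0)       # per-layer plasticity rates
meta_opt = torch.optim.Adam([W1_init, W2_init, B, log_eta], lr=1e-3)

for _ in range(meta_steps):
    # Each episode is a fresh task: regress a random linear teacher.
    W_task = torch.randn(n_out, n_in)
    W1, W2 = W1_init, W2_init
    eta1, eta2 = log_eta.exp()

    for _ in range(inner_steps):
        x = torch.randn(n_in)
        h = torch.relu(W1 @ x)            # feedforward pass
        err = W_task @ x - W2 @ h         # output error, local at the output layer
        fb = (B @ err) * (h > 0).float()  # error routed to hidden units via B, not W2.T
        # Local updates: outer products of presynaptic activity with a locally
        # available error/feedback signal. Out-of-place so the outer loop can
        # differentiate through the plasticity rule.
        W2 = W2 + eta2 * torch.outer(err, h)
        W1 = W1 + eta1 * torch.outer(fb, x)

    # Meta-objective: post-adaptation loss on a held-out query point.
    x_q = torch.randn(n_in)
    meta_loss = ((W_task @ x_q - W2 @ torch.relu(W1 @ x_q)) ** 2).mean()
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
```

Unlike feedback alignment, where `B` would stay fixed and random, here the outer loop shapes `B` along with the plasticity rates, which is the sense in which the feedback pathway itself is meta-learned; inspecting the trained `B` is the kind of analysis the reviewers ask to see in the final draft.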