{"forum": "HJlKNmFIUB", "submission_url": "https://openreview.net/forum?id=HJlKNmFIUB", "submission_content": {"TL;DR": "Metalearning unsupervised update rules for neural networks improves performance and potentially demonstrates how neurons in the brain learn without access to global labels.", "keywords": ["Hebbian learning", "deep learning optimization", "metalearning", "learning to learn"], "authors": ["Jeffrey Siedar Cheng", "Ari Benjamin", "Benjamin Lansdell", "Konrad Paul Kording"], "title": "Augmenting Supervised Learning by Meta-learning Unsupervised Local Rules", "abstract": "The brain performs unsupervised learning and (perhaps) simultaneous supervised learning. This raises the question as to whether a hybrid of supervised and unsupervised methods will produce better learning. Inspired by the rich space of Hebbian learning rules, we set out to directly learn the unsupervised learning rule on local information that best augments a supervised signal. We present the Hebbian-augmented training algorithm (HAT) for combining gradient-based learning with an unsupervised rule on pre-synpatic activity, post-synaptic activities, and current weights. We test HAT's effect on a simple problem (Fashion-MNIST) and find consistently higher performance than supervised learning alone. This finding provides empirical evidence that unsupervised learning on synaptic activities provides a strong signal that can be used to augment gradient-based methods.\n \n We further find that the meta-learned update rule is a time-varying function; thus, it is difficult to pinpoint an interpretable Hebbian update rule that aids in training. We do find that the meta-learner eventually degenerates into a non-Hebbian rule that preserves important weights so as not to disturb the learner's convergence.", "authorids": ["jeffch@seas.upenn.edu", "aarrii@seas.upenn.edu", "lansdell@seas.upenn.edu", "kording@seas.upenn.edu"], "pdf": "/pdf/f7cd10118acd21c3d7a68ad5f093e8f6ce3a6059.pdf", "paperhash": "cheng|augmenting_supervised_learning_by_metalearning_unsupervised_local_rules"}, "submission_cdate": 1568211760885, "submission_tcdate": 1568211760885, "submission_tmdate": 1572563758901, "submission_ddate": null, "review_id": ["B1xt1LUfwH", "r1xuw969Dr", "SJlpleNoDr"], "review_url": ["https://openreview.net/forum?id=HJlKNmFIUB¬eId=B1xt1LUfwH", "https://openreview.net/forum?id=HJlKNmFIUB¬eId=r1xuw969Dr", "https://openreview.net/forum?id=HJlKNmFIUB¬eId=SJlpleNoDr"], "review_cdate": [1568986593466, 1569540704281, 1569566708657], "review_tcdate": [1568986593466, 1569540704281, 1569566708657], "review_tmdate": [1570047567836, 1570047541073, 1570047535655], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper49/AnonReviewer2"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper49/AnonReviewer1"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper49/AnonReviewer3"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["HJlKNmFIUB", "HJlKNmFIUB", "HJlKNmFIUB"], "review_content": [{"evaluation": "2: Poor", "intersection": "3: Medium", "importance_comment": "Meta-learning is surely crucial to how the brain works, and it is very interesting to investigate local learning rules via meta-learning. 
However, this paper does not approach the issue from a sensible standpoint, and is very confused about the application of meta-learning (more on this below), so it does not make a very important contribution in my opinion.", "clarity": "2: Can get the general idea", "technical_rigor": "2: Marginally convincing", "intersection_comment": "The actual use of neuroscience here is limited, and is almost wholly born out of a misunderstanding about the nature of learning in the brain, e.g. that the learning rules are local and could not possibly follow cost function gradients. Note, for example, that there is growing evidence of non-Hebbian plasticity in the brain, see e.g.: https://science.sciencemag.org/content/357/6355/1033.abstract\n\nMoreover, the impact on AI would be limited, as this paper does not provide any advances in the field of meta-learning for ML.", "rigor_comment": "The technical approach in this paper is problematic. A few major issues:\n\n1. The claim that local signals in the brain do not carry global loss function information is pure speculation, and not well founded. For example, equilibrium propagation (https://www.frontiersin.org/articles/10.3389/fncom.2017.00024/full) uses local learning rules and does follow a global loss gradient. Furthermore, gradient descent != supervised learning. So, right off the bat, some of the motivations for the paper are unjustified.\n\n2. Generally, in meta-learning there are two learning loops: an inner loop where the learner's parameters are updated, and an outer loop where the meta-learner's parameters are updated. Importantly, meta-learning typically involves multiple different task variants in the outer loop, because the point is that the system must learn how to learn across tasks. However, in this paper, there is no outer loop where multiple tasks are used to learn how to learn. As such, the meta-learner does not learn how to learn broadly. Rather, the meta-learner is only tasked with figuring out weight updates for a specific task, under the constraint of a local update rule. This is not so much meta-learning as optimisation of a local learning rule. This, I suspect, is why no data efficiency improvements occur. The entire approach is muddled.\n\n3. The performance is not very impressive. It is possible to achieve much better results on Fashion-MNIST with backprop than the authors report. Given that no data efficiency gain is achieved either, I don't actually see any real technical contribution here.", "comment": "This submission has some neat ideas in its kernel, but the authors are confused about both learning in the brain and meta-learning more broadly. For examples of more clear-headed papers that are thinking in a similar direction to this submission, see:\n\nhttp://papers.nips.cc/paper/7359-long-short-term-memory-and-learning-to-learn-in-networks-of-spiking-neurons\n\nhttps://openreview.net/forum?id=r1lrAiA5Ym", "importance": "2: Marginally important", "title": "An interesting idea, but very confused in its approach", "category": "Common question to both AI & Neuro", "clarity_comment": "The clarity is very poor. Example: the authors refer to parameters phi_L and phi_M in the text, but these are not mentioned in the algorithm. The paper is littered with such stray concepts, etc. 
Moreover, the basic premises are poorly stated in my opinion."}, {"title": "Interesting idea; methods are confusing and difficult to make sense of, results are not yet fully convincing.", "importance": "3: Important", "importance_comment": "The motivation behind the work is important; however, the methods and results are not yet convincing and need substantially more intuition and analyses to establish that the results are significant.\n", "rigor_comment": "The neuroscience motivation is based on some false assumptions. While the brain may not learn exactly via backprop, it\u2019s still an open question how large a role global error signals play in learning, especially given the rich literature in predictive coding and error feedback for visuomotor tasks. In general, prediction plays a key role in learning, and this includes predicting distributions of the world.\n\nSome of the results require more experiments to be convincing. The experiment carried out is for a single trial; it\u2019d be good to get a sense of reproducibility. The difference is barely distinguishable in Figure 1, right panel. Also, the control is not described in sufficient detail.", "clarity_comment": "While the motivation and connections are fleshed out, the methods are confusing and details are missing. The results section is also difficult to interpret in the context of the motivation. In general, more intuition is needed to justify these choices. In addition, the results need more experiments to probe potential explanations. For instance, is it possible that the meta-learner is acting as a normalizer, keeping the neural activities in each layer within some bounds?", "clarity": "4: Well-written", "evaluation": "2: Poor", "intersection_comment": "Strong. Tackles the question of integrating biologically plausible learning rules alongside backprop to train neural networks.", "intersection": "4: High", "comment": "Some more analyses would be helpful, as would greater clarity in discussing the models, mainly the meta-learner.", "technical_rigor": "3: Convincing", "category": "Neuro->AI"}, {"title": "review", "importance": "4: Very important", "importance_comment": "I believe the approach of meta-learning biologically plausible learning rules is extremely promising.\n", "rigor_comment": "I greatly appreciated the clear discussion of the unexpected behavior of the learning rule, and the counterintuitive mechanism by which it may be acting.", "clarity_comment": "It was difficult to understand the details of the algorithm, though I believe this is largely due to the length constraints.", "clarity": "3: Average readability", "evaluation": "4: Very good", "intersection_comment": "This is taking meta-learning techniques from machine learning and applying them to biological learning.\n", "intersection": "4: High", "comment": "I believe the approach and preliminary results are promising.\n", "technical_rigor": "4: Very convincing", "category": "AI->Neuro"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate ": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_confidence": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": "Accept (Poster)"}
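
For readers of this record, the following is a minimal, hypothetical sketch of the kind of update the abstract describes: a supervised gradient step combined with a learned local rule that sees only pre-synaptic activity, post-synaptic activity, and the current weight. It is not the authors' implementation; the learner (a single linear layer), the toy parameterization of the rule, the function names (local_rule, hat_step), and all hyperparameters are placeholders, and the outer loop that would actually meta-learn the rule's parameters phi is omitted.

```python
# Hypothetical sketch (not the authors' code): one step of a "Hebbian-augmented"
# style update for a single linear layer. A small parameterized function
# `local_rule` maps (pre-synaptic activity, post-synaptic activity, current
# weight) to a local weight change, added on top of an ordinary supervised
# gradient step. The outer loop that would train `phi` is omitted; `phi` is
# simply fixed random here.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, batch = 8, 4, 32

W = rng.normal(scale=0.1, size=(n_out, n_in))   # learner weights
phi = rng.normal(scale=0.1, size=3)             # toy rule parameters (stand-in for a meta-learned function)


def local_rule(pre, post, w, phi):
    """Label-free local term: depends only on pre/post activity and the weight itself."""
    return phi[0] * pre * post + phi[1] * w + phi[2] * post


def hat_step(W, x, y, phi, lr_sup=1e-2, lr_local=1e-3):
    """One combined update: supervised MSE gradient plus the local term."""
    post = W @ x.T                      # (n_out, batch) post-synaptic activities
    err = post - y.T                    # supervised error signal
    grad_W = err @ x / x.shape[0]       # gradient of 0.5 * mean squared error w.r.t. W

    pre_mean = x.mean(axis=0)           # (n_in,)  batch-averaged pre-synaptic activity
    post_mean = post.mean(axis=1)       # (n_out,) batch-averaged post-synaptic activity
    local = local_rule(pre_mean[None, :], post_mean[:, None], W, phi)  # (n_out, n_in)

    return W - lr_sup * grad_W + lr_local * local


# Toy supervised problem: regress onto a fixed random linear map.
x = rng.normal(size=(batch, n_in))
y = x @ rng.normal(size=(n_out, n_in)).T

for _ in range(200):
    W = hat_step(W, x, y, phi)

print("final MSE:", float(np.mean((x @ W.T - y) ** 2)))
```

The only point of the sketch is the structure of the update: the first term requires labels, while the second depends purely on locally available quantities. In the paper, that second function is itself trained over the course of learning, which this sketch does not attempt.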