{"forum": "B1g0QmtIIS", "submission_url": "https://openreview.net/forum?id=B1g0QmtIIS", "submission_content": {"TL;DR": "We show that a working memory input to a reservoir network makes a local reward-modulated Hebbian rule perform as well as recursive least-squares (aka FORCE)", "keywords": ["reservoir networks", "recurrent neural networks", "local rules", "Hebbian rules", "continuous attractors"], "authors": ["Roman Pogodin", "Dane Corneil", "Alexander Seeholzer", "Joseph Heng", "Wulfram Gerstner"], "title": "Working memory facilitates reward-modulated Hebbian learning in recurrent neural networks", "abstract": "Reservoir computing is a powerful tool to explain how the brain learns temporal sequences, such as movements, but existing learning schemes are either biologically implausible or too inefficient to explain animal performance. We show that a network can learn complicated sequences with a reward-modulated Hebbian learning rule if the network of reservoir neurons is combined with a second network that serves as a dynamic working memory and provides a spatio-temporal backbone signal to the reservoir. 
In combination with the working memory, reward-modulated Hebbian learning of the readout neurons performs as well as FORCE learning, but with the advantage of a biologically plausible interpretation of both the learning rule and the learning paradigm.", "authorids": ["roman.pogodin.17@ucl.ac.uk", "dane@corneil.ca", "seeholzer@gmail.com", "joseph.a.heng@gmail.com", "wulfram.gerstner@epfl.ch"], "pdf": "/pdf/3b48a3c3f3d6a46fcb3c035ad06725fad57941f4.pdf", "paperhash": "pogodin|working_memory_facilitates_rewardmodulated_hebbian_learning_in_recurrent_neural_networks"}, "submission_cdate": 1568211749730, "submission_tcdate": 1568211749730, "submission_tmdate": 1571837701010, "submission_ddate": null, "review_id": ["H1lLjr6qvS", "BkeI3x4owH"], "review_url": ["https://openreview.net/forum?id=B1g0QmtIIS&noteId=H1lLjr6qvS", "https://openreview.net/forum?id=B1g0QmtIIS&noteId=BkeI3x4owH"], "review_cdate": [1569539485550, 1569566894448], "review_tcdate": [1569539485550, 1569566894448], "review_tmdate": [1570047541725, 1570047535446], "review_readers": [["everyone"], ["everyone"]], "review_writers": [["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper22/AnonReviewer3"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper22/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1g0QmtIIS", "B1g0QmtIIS"], "review_content": [{"title": "Interesting study with nice result showcasing biologically plausible learning rule for temporal sequences", "importance": "3: Important", "importance_comment": "The ideas and results presented by the authors are novel and impressive. The authors combined two biologically plausible concepts to construct their learning model: a reward-modulated Hebbian learning rule and a working memory. It is impressive that the model achieves near-optimal performance comparable to FORCE while remaining computationally cheap. 
", "rigor_comment": "While the results from the proposed model are impressive, I wish there were more in-depth investigations of how the bump attractor network influences the reservoir dynamics, and thus the convergence to the target signal under the Hebbian learning rule. \n\nFor instance, the authors state in the Introduction \"We propose stabilizing the reservoir activity by combining it with a continuous attractor\" and \"...feeding an abstract oscillatory input or a temporal backbone signal to the reservoir in order to overcome structural instabilities of the FORCE\", but I am unable to find satisfactory explanations for these statements in the paper. According to the original paper by Sussillo and Abbott, an initially chaotic state of the reservoir actually improved the training performance for particular choices of the parameter 'g'. Does the chaotic nature of the reservoir only inhibit the model's learning capability? Could there be choices of the chaos parameter that actually help learning? It would be interesting to look into these questions to better understand the roles of chaos and of the attractor input in successful learning. ", "clarity_comment": "The paper has a few typos but is overall well written and easy to follow. ", "clarity": "4: Well-written", "evaluation": "4: Very good", "intersection_comment": "The subject of network learning rules for complex temporal sequences is an important topic for both the AI and neuroscience communities. The concepts discussed in the paper are good examples of a biologically plausible interpretation of such learning rules, which appeals to both communities. ", "intersection": "4: High", "comment": "As mentioned in the technical rigor section, a more rigorous investigation of the role of the attractor in successful learning would be a great addition to the paper. 
", "technical_rigor": "3: Convincing", "category": "Common question to both AI & Neuro"}, {"title": "latent bump attractor network improves performance of RNN with local learning rule", "importance": "3: Important", "importance_comment": "This work extends earlier work on reward-modulated Hebbian plasticity in RNNs with a latent bump attractor network, which helps the network bridge long timespans. The work provides no in-depth analysis of the mechanisms underlying the improved performance. Overall, it seems a small improvement over previous work. The author(s) provide code, which makes the paper completely reproducible. ", "rigor_comment": "Generally, the results seem plausible and sound. However, the putative mechanism behind the improved performance (slow dynamics of the bump attractor bridging the gap from the short timescale of Hebbian plasticity to the long timespans of the task) is only hypothesized but not actually studied. Also, a mechanistic understanding of how chaos is suppressed during training is missing. The paper compares the novel learning algorithm to its predecessor on a single toy example, so it is difficult to see how its performance compares with alternative approaches. A huge bonus is that the author(s) provide code, which makes the paper completely reproducible. ", "clarity_comment": "The problem is clearly explained. The details of the implementation are not described (time step, adaptation parameter, gain parameter lambda, often called g for RNNs) but are available in the accompanying code. The results are stated clearly.", "clarity": "3: Average readability", "evaluation": "3: Good", "intersection_comment": "Learning long-term dependencies is challenging in RNNs both in machine learning and in neuroscience because it requires bridging the timescale from single-neuron interactions (milliseconds) to the duration of tasks (seconds). 
While in the AI field this is nowadays usually addressed by gated units, the solution proposed here aims to achieve it with biologically plausible local learning rules in combination with a latent bump attractor. The proposed solution is, to my knowledge, novel in the reservoir computing community; it probably has only limited relevance to the AI community (which would just use gated units) but seems moderately relevant for the neuroscience community.", "intersection": "3: Medium", "comment": "The overall contribution of the paper is significant (in the sense of noticeable), but rather incremental.\n* The biological plausibility of working memory implemented by a bump attractor generated by 2500 firing-rate units (which usually represent large populations of spiking neurons) is questionable.\n* It is not clear how \"u\" can be interpreted as a membrane potential when the entire network operates on the level of firing rates.\n* It is not clear how delays impede the suppression of chaos.\n* It is not clear how this network would perform on other typical reservoir-computing tasks, e.g. the Romo task, or how the performance improvements relate to \"hints\" given to the network during training (e.g. in full-FORCE).\n* The performance should be compared to other learning algorithms for training RNNs, especially those striving for biological plausibility, e.g. 
feedback alignment, local online learning in recurrent networks with random feedback (RFLO; Murray, 2019), and e-prop (Guillaume Bellec, Franz Scherr, Elias Hajek, Darjan Salaj, Robert Legenstein, Wolfgang Maass, 2019).\n* Despite some shortcomings in the depth of the analysis, the paper is altogether \"good\"; in particular, the publication of the accompanying code is exemplary and enhances both understandability and reproducibility.", "technical_rigor": "3: Convincing", "category": "Common question to both AI & Neuro"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate ": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_confidence": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": "Accept (Poster)"}
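For context, the reward-modulated Hebbian readout learning that the abstract and reviews discuss can be sketched as follows. This is a minimal illustration in the spirit of exploratory Hebbian rules for reservoirs (cf. Hoerzer, Legenstein & Maass, 2014), not the authors' actual implementation; the network size, time constants, learning rate, noise level, and target signal are all assumed values chosen for the toy example.

```python
import numpy as np

# Minimal sketch: a fixed random reservoir with a linear readout trained
# by a reward-modulated Hebbian rule. Only the readout weights w learn;
# the recurrent weights J stay fixed, as in reservoir computing.
rng = np.random.default_rng(0)
N, T, dt = 200, 2000, 0.01        # assumed: reservoir size, steps, time step
g, tau = 1.5, 0.1                 # assumed: chaos gain and membrane time constant
J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # fixed recurrent weights
w = np.zeros(N)                   # learned readout weights
eta, sigma = 5e-4, 0.05           # assumed: learning rate, exploration noise

t_axis = np.arange(T) * dt
target = np.sin(2 * np.pi * t_axis)  # toy target sequence

x = 0.1 * rng.standard_normal(N)  # reservoir state
r = np.tanh(x)                    # firing rates
z_bar, p_bar = 0.0, 0.0           # running averages of readout and reward
for t in range(T):
    x += dt / tau * (-x + J @ r)
    r = np.tanh(x)
    z = w @ r + sigma * rng.standard_normal()  # noisy (exploratory) readout
    p = -(z - target[t]) ** 2                  # instantaneous reward
    # Reward-modulated Hebbian update: when reward exceeds its recent
    # average, reinforce the correlation between the exploration
    # fluctuation (z - z_bar) and the presynaptic rates r.
    if p > p_bar:
        w += eta * (z - z_bar) * r
    z_bar += 0.1 * (z - z_bar)
    p_bar += 0.1 * (p - p_bar)

print(w.shape)  # -> (200,)
```

The update is local (presynaptic rate times postsynaptic fluctuation, gated by a scalar reward signal), which is the biological-plausibility property the reviews contrast with the non-local recursive least-squares update of FORCE.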