{"forum": "SkxJ4QKIIS", "submission_url": "https://openreview.net/forum?id=SkxJ4QKIIS", "submission_content": {"TL;DR": "We present eligibility propagation an alternative to BPTT that is compatible with experimental data on synaptic plasticity and competes with BPTT on machine learning benchmarks.", "keywords": ["neuroscience", "plausible learning rules", "spiking neurons", "BPTT", "recurrent neural networks", "LSTM", "RNN", "computational neuroscience", "backpropagation through time", "online learning", "real-time recurrent learning", "RTRL", "eligibility traces"], "pdf": "/pdf/58cd9e53436fcad3c17d6b8f932eee660c602792.pdf", "authors": ["Guillaume Bellec", "Franz Scherr", "Elias Hajek", "Darjan Salaj", "Anand Subramoney", "Robert Legenstein", "Wolfgang Maass"], "title": "Eligibility traces provide a data-inspired alternative to backpropagation through time", "abstract": "Learning in recurrent neural networks (RNNs) is most often implemented by gradient descent using backpropagation through time (BPTT), but BPTT does not model accurately how the brain learns. Instead, many experimental results on synaptic plasticity can be summarized as three-factor learning rules involving eligibility traces of the local neural activity and a third factor. We present here eligibility propagation (e-prop), a new factorization of the loss gradients in RNNs that fits the framework of three factor learning rules when derived for biophysical spiking neuron models. When tested on the TIMIT speech recognition benchmark, it is competitive with BPTT both for training artificial LSTM networks and spiking RNNs. Further analysis suggests that the diversity of learning signals and the consideration of slow internal neural dynamics are decisive to the learning efficiency of e-prop.", "authorids": ["bellec@igi.tugraz.at", "scherr@igi.tugraz.at", "e.hajek@student.tugraz.at", "salaj@igi.tugraz.at", "subramoney@igi.tugraz.at", "legenstein@igi.tugraz.at", "maass@igi.tugraz.at"], "paperhash": "bellec|eligibility_traces_provide_a_datainspired_alternative_to_backpropagation_through_time"}, "submission_cdate": 1568211750574, "submission_tcdate": 1568211750574, "submission_tmdate": 1574678799182, "submission_ddate": null, "review_id": ["r1gLAnc8wr", "rkx_mAqfwH", "HJg2Es-owS"], "review_url": ["https://openreview.net/forum?id=SkxJ4QKIIS&noteId=r1gLAnc8wr", "https://openreview.net/forum?id=SkxJ4QKIIS&noteId=rkx_mAqfwH", "https://openreview.net/forum?id=SkxJ4QKIIS&noteId=HJg2Es-owS"], "review_cdate": [1569266893808, 1569005087707, 1569557300458], "review_tcdate": [1569266893808, 1569005087707, 1569557300458], "review_tmdate": [1570047562069, 1570047546911, 1570047537408], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper24/AnonReviewer1"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper24/AnonReviewer3"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper24/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["SkxJ4QKIIS", "SkxJ4QKIIS", "SkxJ4QKIIS"], "review_content": [{"evaluation": "4: Very good", "intersection": "5: Outstanding", "importance_comment": "The authors consider how biologically motivated synaptic eligibility traces can be used for backpropagation-like learning, in particular by approximating local gradient computations in recurrent neural networks. This sheds new light on how artificial network algorithms might be implementable by the brain. 
", "clarity": "4: Well-written", "technical_rigor": "4: Very convincing", "intersection_comment": "The authors directly tried to associate biological learning rules with deep network learning rules in AI.\n", "rigor_comment": "Space is of course limited, but the mathematics presented seem to pass all sanity checks and gives sufficiently rigor to the authors' approach. It would have been nice to present a figure showing how e-prop yields eligibility traces resembling STDP, as this is one of the key connections of this work to biology.", "comment": "Gives important new results about how eligibility traces can be used to approximate gradients when adequately combined with a learning signal. While eligibility traces have received some attention in neuroscience their relevance to learning has not been thoroughly explored, so this paper makes a welcome contribution that fits well within the workshop goals.\n\nOne part that would have been nice to clarify is the relative role of random feedback vs eligibility traces in successful network performance. It also would have been nice to comment on the relationship of this work to unsupervised (e.g. Hebbian-based) learning rules.\n\nA final addition that would have made this work more compelling would have been to more thoroughly explore e-prop for computations that unfold on timescales beyond those built-in to the neurons (e.g. membrane or adaptation timescales) and which instead rely on reverberating network activity.", "importance": "4: Very important", "title": "An exciting step toward biological solutions to temporal credit assignment and gradient computation in recurrent network training", "category": "Common question to both AI & Neuro", "clarity_comment": "Given its technical details it was reasonably straightforward to follow.\n"}, {"title": "A clear and principled solution to an important problem. Missing a more transparent exposition of its limitations and connection to previous literature.", "importance": "5: Astounding importance", "importance_comment": "Understanding how synaptic plasticity allows recurrent neural circuits to produce functional patterns of activity is a critical question in neuroscience. This paper directly addresses this question by deriving a synaptic plasticity rule that does exactly this, as well as contextualizing it within a number of related experimental findings.", "rigor_comment": "Due to the space constraints of a 4-page paper, not many mathematical details are provided for the derivation of the proposed algorithm. However, the exposition of the algorithm is clear and principled and the simulations are convincing. One piece that is missing from the results is the limitations of the e-prop algorithm relative to BPTT, given the approximations made in its derivation.", "clarity_comment": "Mostly well-written and clear.", "clarity": "4: Well-written", "evaluation": "4: Very good", "intersection_comment": "This paper derives a biologically plausible plasticity rule approximating the backpropagation-through-time (BPTT) algorithm from the artificial intelligence literature, explicitly linking artifical intelligence learning algorithms to biological ones.", "intersection": "5: Outstanding", "comment": "The work presented in this paper is highly relevant to this workshop and a valuable contribution to the field of synaptic plasticity and learning in recurrent networks. There is little question in the mind of this reviewer that this paper merits a high score. 
That said, in the opinion of this reviewer two important pieces are missing in this paper. \n\nFirstly, the discussion of how the proposed algorithm relates to previous proposals is very limited. In particular, making the explicit connection to real-time recurrent learning (RTRL) is warranted, as these two algorithms are very similar in spirit. Additionally, it seems that e-prop is very similar to the particular RTRL approximation proposed in reference 8. This link also merits further discussion.\n\nSecondly, an interesting question is how the approximations made in e-prop affect its performance. For example, asymptotic performance seems not to be affected (figure D), but learning speed is (figure C). Why is this? And are these limitations inherent to any biologically plausible (i.e. local) approximation to BPTT?\n\nThese may have reasonably been omitted due to space constraints, but it would be ideal if they were explored and discussed in the future presentation of this work.", "technical_rigor": "4: Very convincing", "category": "AI->Neuro"}, {"title": "biologically plausible training of spiking recurrent networks based on slow threshold adaptation", "importance": "5: Astounding importance", "importance_comment": "This work addresses how temporal credit assignment can be solved in spiking recurrent networks. Based on approximate gradients of a loss in recurrent spiking networks with threshold adaptation, a biologically plausible local learning rule is derived that involves an eligibility trace, pre- and postsynaptic activity. The results seem unparalleled both in terms of performance and biological plausibility and open a promising avenue to implement (reinforcement) learning in spiking neural networks.", "rigor_comment": "The derivation of a local and biologically plausible learning rule is only partially understandable (because of the limitations of the 4-page format), but the general concept is clear. It is, however not clear how different simplifying assumption in the approximation of the gradients are justified and why they have only a minor effect on the final performance. Moreover, the robustness of the results with respect to details of the parameters is not apparent. Is the excellent performance only observed in a small parameter regime that requires fine-tuning, or is it a general feature? When does it break down and why? Is the assumption of a fully connected network crucial, or would this also work on sparse networks? ", "clarity_comment": "The problem is stated clearly, the methods are explained well. Because of the limited space, the derivation is only conceptually understandable not in every mathematical step, but the reviewer can't blame the authors for that. The results are clear and understandable.\n", "clarity": "4: Well-written", "evaluation": "5: Excellent", "intersection_comment": "The problem of credit assignment in recurrent networks is relevant both for machine learning and for neuroscience. While superficially, this works seems mostly to be a biologically plausible implementation of gradient-based learning in recurrent spiking networks, it might also provide inspiration for the machine learning community to think beyond discrete-time firing RNN. Currently spiking networks are barely used in machine learning despite their advantages (e.g. 
lower energy need), because it seems difficult to train them to do something useful, hopefully, this paper might be a step towards changing this.", "intersection": "5: Outstanding", "comment": "This work is very suitable for the workshop and seems relevant both to machine learning and neuroscience. \nNevertheless, here are a couple of ideas for improvement: \n* Relating this work to previous attempts to train spiking neurons would be important. (e.g. \nD. Thalmeier, M. Uhlmann, B. Kappen, and R.-M. Memmesheimer 2016, DePasquale, B., Churchland, M.M. & Abbott, L.F. 2016, R. Guetig 2016, Kim, Chow 2018, A. Ingrosso, L.F. Abbott 2018)\n* How does the network capacity scale with network size? \n* Is the low irregularity (coefficient of variation of inter-spike intervals after training seems very small) a feature of the learning algorithm? If yes, how can this get more realistic irregular?\n* What is the dynamic state of networks after training? Is there a cancellation of external inputs by net inhibitory recurrent interaction, like in a balanced state? How do pairwise correlations change during training and are they biologically plausible?\n* Are spikes in this framework necessary for computation, or are they just a biologically plausible feature that doesn't harm too much? If spikes are not required, could this be mapped to a rate-based analogous network e.g. with BCM-like plasticity, where analytical results might be easier to achieve?\n* What are the core mechanisms of this learning algorithm and how could they be understood in more detail?\n* How could this be used to implement reinforcement learning, regression, classification, time-series prediction?\n* (How) Can E-prop be characterized analytically in a simplified form/on a toy problem?\n* Which experimentally testable predictions arising from this work?", "technical_rigor": "4: Very convincing", "category": "Common question to both AI & Neuro"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate ": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_confidence": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": "Accept (Oral)"}