{"forum": "Bkfw77FLIS", "submission_url": "https://openreview.net/forum?id=Bkfw77FLIS", "submission_content": {"authors": ["Hannes Rapp", "Martin Paul Nawrot", "Merav Stern"], "abstract": "Making an informed, correct and quick decision can be life-saving. It's crucial for animals during an escape behaviour or for autonomous cars during driving. The decision can be complex and may involve an assessment of the amount of threats present and the nature of each threat. Thus, we should expect early sensory processing to supply classification information fast and accurately, even before relying the information to higher brain areas or more complex system components downstream. Today, advanced convolution artificial neural networks can successfully solve such tasks and are commonly used to build complex decision making systems. However, in order to achieve excellent performance on these tasks they require increasingly complex, \"very deep\" model structure, which is costly in inference run-time, energy consumption and number of training samples, only trainable on cloud-computing clusters.\nA single spiking neuron has been shown to be able to solve many of these required tasks for homogeneous Poisson input statistics, a commonly used model for spiking activity in the neocortex; when modeled as leaky integrate and fire with gradient decent learning algorithm it was shown to posses a wide variety of complex computational capabilities. Here we refine its learning algorithm. The refined gradient-based local learning rule allows for better and stable generalization. We take advantage of this improvement to solve a problem of multiple instance learning (MIL) with counting where labels are only available for collections of concepts. 
We use an MNIST task to show that the neuron indeed exploits these improvements and performs on par with a conventional ConvNet architecture of similar parameter-space size and number of training epochs.", "pdf": "/pdf/5a5a208013ca3fadc74d4039c52f9580aa1ec4a9.pdf", "keywords": ["spiking neural networks", "neural plasticity", "pattern recognition", "single neuron", "classification"], "title": "Pattern recognition of labeled concepts by a single spiking neuron model.", "authorids": ["hannes.rapp@smail.uni-koeln.de", "martin.nawrot@uni-koeln.de", "merav.stern@mail.huji.ac.il"], "paperhash": "rapp|pattern_recognition_of_labeled_concepts_by_a_single_spiking_neuron_model"}, "submission_cdate": 1568211743184, "submission_tcdate": 1568211743184, "submission_tmdate": 1572339746866, "submission_ddate": null, "review_id": ["rye7CmE4Pr", "HyePCJh9vr"], "review_url": ["https://openreview.net/forum?id=Bkfw77FLIS&noteId=rye7CmE4Pr", "https://openreview.net/forum?id=Bkfw77FLIS&noteId=HyePCJh9vr"], "review_cdate": [1569108939496, 1569533902756], "review_tcdate": [1569108939496, 1569533902756], "review_tmdate": [1570047564951, 1570047544500], "review_readers": [["everyone"], ["everyone"]], "review_writers": [["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper6/AnonReviewer2"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper6/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["Bkfw77FLIS", "Bkfw77FLIS"], "review_content": [{"evaluation": "4: Very good", "intersection": "4: High", "importance_comment": "The manuscript addresses whether artificial neural networks that better incorporate the discrete nature of biological neuron spiking dynamics would have different \"scaling\" in the number of model parameters versus the complexity of the problems that can be solved. This both addresses key challenges of AI network scaling and touches on what aspects of biological neural processing are critical to the brain's computational efficiency. 
", "clarity": "4: Well-written", "technical_rigor": "3: Convincing", "intersection_comment": "While the work itself primarily focuses on AI-motivated questions, the work addresses questions relevant to the interface between both fields. ", "rigor_comment": "The results could be improved by presenting performance comparison metrics evaluated across multiple train-test sets, rather than for a single set. Some sense of distribution of performance and statistical significance of differences between models would greatly improve the findings. \nThe rigor (and clarity for broader audiences) would be improved by incorporating more detail on algorithms used (brief summaries and key equations) rather than purely pointing to citations. ", "comment": "A solid paper that tests the hypothesis that the discrete nature of neuron computations may provide computational advantages by allowing artificial networks to solve complex problems with fewer parameters. ", "importance": "4: Very important", "title": "Interesting, could be improved with adding detail", "category": "AI->Neuro", "clarity_comment": "See comments in Rigor about providing more detail on algorithms. "}, {"title": "Very promising results", "importance": "4: Very important", "importance_comment": "Taken at face value this is a very impressive set of results. As the authors make the case in the introduction, readout and training costs for deep networks can be expensive in terms of hardware, energy and time. This much reduced model builds on previous work quite substantially, perhaps not in concept but definitely in performance as evidenced by the large decrease in error rates between different MST implementations. Very promising.", "rigor_comment": "The work appears principled, and since the method builds on a well-known previous study it can be assumed that the method is reasonably robust. However (and this is an obvious criticism) the method was only applied to one particular and not-too-common task. 
How would it perform on other tasks? Are there task domains where MST would be expected to fail? And are there tasks where MST could excel even further? None of this is explored or even speculated on in the paper, leaving me unsure how robust the results are.", "clarity_comment": "The paper is extremely well-written, insight is offered in almost every sentence, and caveats and possible criticisms are frequently pre-empted.", "clarity": "4: Well-written", "evaluation": "4: Very good", "intersection_comment": "This is a true biologically-inspired machine learning method. It is based on fundamental biological neuron properties (spiking dynamics, synaptic inputs, temporal data), but trained on a standard supervised machine learning task using gradient descent methods.", "intersection": "5: Outstanding", "comment": "As mentioned above, the conceptual advance was a bit incremental over previous work, but the results are very impressive and exciting. However, I would like to see more exploration of other tasks to see where this method would work and where it wouldn't.", "technical_rigor": "3: Convincing", "category": "Neuro->AI"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate ": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_confidence": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": "Accept (Poster)"}