review_id,review,rating,decision neuroai19_1_1,"""The authors use GANs, as an alternative to likelihood based approaches, to generate spike trains that match experimental data. Generating realistic spike trains is useful for both Neuro and AI. I defer to other reviewers on the GAN training procedure as I have limited expertise in the area. The paper is very well written and details are appropriately explained. Generating realistic spike trains is useful for both Neuro and AI applications. The authors use GANs to generate spike trains that match experimental data. Features of the data that are compared are 1) firing rates, 2) pairwise correlations, 3) population spike count histogram. The authors show that, on the training data, GANs can reproduce all features of the data very well. The authors also show that GANs outperform a dichotomized Gaussian model when fitting the population spike count histogram (although the difference is small). Areas for improvement: It might be informative to also show validation results and not only training data results. On a similar note, the next step of assessing transfer learning across data sets would be very interesting.""",4,1 neuroai19_1_2,""" A basic question in system and computational neuroscience is to come out with good models of neural spike trains. This paper introduces a new method to capture the response patterns of population spike trains, by using Generative adversarial networks (GANs). The techniques presented in the paper appears to be solid and convincing. The authors compared the proposed method with previously proposed models, i.e., dichotomised Gaussian model, supervised model. The authors found that GAN-based model is overall a good model. It can capture the probability of firing rate, pairwise correlation, spike count histogram, and the rasters. Other methods can fail in one or several of these aspects. The writing is clear. This paper uses a popular method in AI to address a classic neuroscience problem. Overall, I think this represents an interesting direction. Although the results are preliminary, it does look promising. I have several concerns/suggestions: 1. it would be helpful if the authors could discuss the applications of the method in the context of a neuroscience question, (maybe the even showing one real application)? 2. it would be useful if the authors could quantify the high-order correlation structure. By looking at the rasters in Fig. 2A, although it seems GAN-based models looks visually more similar to the real data comparing to others, to make the argument precise, it would be good to quantify it. 3. The improvement of GAN-based models over DG seems to be not totally convincing. """,3,1 neuroai19_2_1,"""The ideas and results presented by the authors are novel and impressive. Authors combined two biologically plausible concepts to construct their learning model: reward modulated Hebbian learning rule and working memory. It is an impressive fit that the model manages to achieve near-optimal performance as FORCE and also computationally cheap. While the results from the proposed model are impressive, to me I wish there were more in depth investigations on how bump attractor network is influencing the reservoir dynamics, and thus the convergence to the target signal with hebbian learning rule. For instance, the authors states in Introduction ""We propose stabilizing the reservoir activity by combining it with a continuous attractor"" and ""...feeding an abstract oscillatory input or a temporal backbone signal to the reservoir in order to overcome structural instabilities of the FORCE"", but I am unable to find satisfactory explanations for these statements in the paper. According to the original paper by Sussillo and Abbot, initial chaotic state of the reservoir actually improved the training performance for particular choice of parameter 'g.' Does chaotic nature of the reservoir only inhibit model's learning capability? Could there be some choices of chaos parameter that actually helps learning? It would be interesting to look into these questions to understand better about the role of chaos, and inputs from attractor for successful learning. The paper has few typos but is overall well written and easy to follow The subject of network learning rule for complex temporal sequences is an important topic for both AI and neuroscience communities. The concepts discussed in the paper are good examples of biologically plausible interpretation of such learning rules, which appeals to both communities. As mentioned in technical rigor section, more rigorous investigation of the role of attractor for successful learning rule would be great addition to the paper. """,4,1 neuroai19_2_2,"""This work extends earlier work on reward-modulated Hebbian plasticity in RNN by a latent bump attractor network, which helps the network to bridge long timespans. The work provides no in-depth analysis of the mechanisms underlying the improved performance. Overall, it seems a small improvement compared to previous work. The author(s) provide code, which makes the paper completely reproducible. Generally, the results seem plausible and sound. However, the putative mechanism behind the improved performance (slow dynamics of bump attractor bridging the timescale from short Hebbian plasticity to long timespans of the task), is only hypothesized but not actually studied. Also, a mechanistic understanding of how chaos is being suppressed during training is missing. The paper compares the novel learning algorithm on one single toy example to is predecessor, so it is difficult to see how its performance compares with alternative approaches. A huge bonus is that the author(s) provide code, which makes the paper completely reproducible. The problem is clearly explained. The details of the implementation are not described (time-step, adaptation parameter, gain parameter lambda (often called g for RNN)) but available in the accompanying code. The results are states clearly. Learning long-term dependencies is challenging in RNN both in machine learning and in neuroscience because it requires bridging the time-scale from single neuron interactions (milliseconds) to the duration of tasks (seconds). While in the AI field, this is these days usually being addressed by gated units, the solution proposed here aims to achieve this with biologically plausible local learning rules in combination with a latent bump attractor. The proposed solution seems to my knowledge to be novel in the reservoir computing community, it has probably only limited relevance to the AI community (because they would just use gated units) and but seems moderately relevant for the neuroscience community. The overall contribution of the paper is significant (in the sense of noticeable), but rather incremental. * The biological plausibility of working memory implemented by bump attractors generated by 2500 firing rate unit (which usually represent large populations of spiking neurons) is questionable. * It is not clear how ""u"" can be interpreted as a membrane potential when the entire network operates on the level of firing rates. * It is not clear how delays impede the suppression of chaos * It is not clear how this network would perform on other typical reservoir-computing tasks, e.g. the Romo task and how the performance improvements relate to ""hints"" given to the network during training (e.g. in full-FORCE). * The performance should be compared to other learning algorithms for training RNN, especially those striving for biological plausibility, e.g. feedback-alignment, Local online learning in recurrent networks with random feedback (RFLO) Murray 2019) and Eprop (Guillaume Bellec, Franz Scherr, Elias Hajek, Darjan Salaj, Robert Legenstein, Wolfgang Maass, 2019). * Despite some shortcomings in the depth of the analysis, the paper altogether ""good"", especially the publication of the accompanying code is exemplary and enhances both understandability and reproducibility.""",3,1 neuroai19_3_1,"""Recent advancements in technology are making it much easier to perform the large-scale recordings of neuronal activity. This ""big-data"", which can consist of thousands of hours of electrophysiological signals, which can be difficult or otherwise impractical to analyze manually. Developing unsupervised methods that can reduce the dimensionality of this data and cluster together similar states is of critical importance. There doesn't seem to be any measure or statistics about how much the signal was denoised, besides what is displayed in the figures. Would it be possible to create a synthetic dataset with noise, and report how much the model improves the SNR under increasing levels of noise? Or even adding some noise to the original LFP signal. Additionally, there isn't much info given in regards to clustering. Fig. 3 shows the original output (with human labels) at the top and the reconstruction with labels at the bottom. Are the bottom labels based on some sort of unsupervised clustering of data in the latent dimension? If so, how was the number of clusters determined? I also have some concerns about the amount of training/testing data and number of subjects. The model was trained on one hour of data from a single subject and then evaluated in a second subject. How much data was used from the second subject? Is it more than the 100 seconds shown in Fig. 3? I think any claims about generalizability would require the use of some sort of statistical measure of performance, followed by evaluations in multiple subjects. For the VAE, is it possible to train with a smaller latent space? If you reduced the size to a small value (like 2 or 3), could you plot the data and see defined clusters representing respiration states? Also, how is the number of time-coordinates (n) determined, and how does performance change as you change n? The paper is nicely written for the most part. For the figures, it would be nice to see a zoomed-in comparison between the input signal and reconstruction at a shorter timescale (like 100 ms). For the methods, it says the loss function was MSE and the negative of the KL divergence. Could you clarify what you mean by this? Also, Reference 11 doesn't seem to match its citation. Research into using unsupervised methods to analyze electrophysiological signals is an excellent example of the applying AI to neuroscience, but it's difficult to evaluate this paper without any additional details on the VAEs performance.""",2,1 neuroai19_3_2,"""The authors consider a specific VAE, i.e. one with additional information going from the input to the output, i.e. the identity of subsampled low-value-time points in the data. Althgouh this approach appears to categorize the data using a small number of latents. It is unclear which parts of the data enable this to work. Simulation results are missing. Some details in the methods are unclear. No intuition is provided as to why the coordinate encoder worked! The authors do not mention if their latent space grows with n? Moreover, with the addition of the coordinate encoding, the latent space should be 40 instead of 20, if I understand correctly. The denoising is interesting, but needs to be compared with simpler methods like low pass filtering and penalized matrix decomposition. Which parts of the approach are actually important to this technique? The authors have only shown that the coordinate encoder works - which parts of this are important? More importantly, why? Why does this technique work? Simulation results would be very helpful. The manuscript is fairly clear, apart from the comments mentioned above. However, when it comes to the method of inputting the coordinate vector, it is quite unclear, even though this is the novelty of the method. It would be good to give an example or provide a figure of the procedure. Interesting addition to a common AI model, to denoise neural data. Definitely needs more work to be able to tell which parts of the model are important for the efficient representation. Promising direction.""",3,1 neuroai19_4_1,"""The authors shed light on how/why spatially distinct regions with different computational roles form in the brain. The authors show the interesting although not too surprising result that penalizing long-range connections results in networks that are functionally and to some extent topologically compartmentalized into spatially separated subgroups of neurons. The impact is diminished by the work lacking a roadmap to further inquiry. The mechanisms they use (primarily the l1 penalty on neural distances) are clearly described, as is the method for splitting the network into two subnetworks. Comparisons with other mechanisms (such as an l2 penalty on neural distances) would have been helpful, and it would have been a worthwhile endeavor to show that their model is in some sense the most natural or minimal model that generates the desired phenomena. The evidence provided by the authors that these subgroups are spatially isolated from each other is visual. Quantitative measures would have made the point more convincing. The authors address the issue of input encoding in a direct, thorough, and convincing way. There may be a better way to assign class labels to hidden neurons than the greedy algorithm they propose. One potential issue that I see is that a given neuron in layer l may have strong connections to two neurons in layer l+1 that themselves are assigned to the same class, but the outputs of these two neurons may cancel out in layer l+2. It might be worth looking at the change of the loss with respect to changes in the hidden layer neurons in order to assign the labels. The authors may have already tried this approach -- I couldn't really tell from the footnote they wrote on the matter. The technical aspects of the paper and the basic reasoning behind them are clear. However, I feel that the motivation behind the work and some of the technical choices that were made could have been clarified somewhat. More discussion as to the ""interpretation"" of their subgraph decomposition method, and comparisons with other possible ways to do this, would have been helpful. Since the results are largely what one would expect to see, it would have given the work's purpose more clarity if they had suggested future steps that could push the work further, or provided some sort of ultimate ""end goal"" for this line of inquiry. The paper as written is concerned with using artificial neural networks to help explain biological ones, without a clear path to closing the loop by going in the other direction. As such, doesn't seem likely to be very interesting to an AI researcher as written. This paper seems like a first, most basic ""sanity check"" that could be done to try to explain in models why the brain forms computationally distinct regions that are also separated in space. While ultimately they will want to take on the more ambitious goal of making the case that their proposed mechanism is truly the primary reason this happens, the scope of the work seems appropriate for a workshop. I think the work would have benefitted from a measure of how close the connections decompose into non-overlapping subgraphs, without taking into consideration the labels, and to show that these ""anatomical"" subgraphs are the same found through their strategy used to find ""functional"" subgraphs via backpropagating label assignments to the neurons. This could help answer the question of if the connectome is sufficient information for defining regions in the brain. As stated above, more discussion as to the ""interpretation"" of their subgraph decomposition method, and comparisons with other possible ways to do this, would have been helpful. Some discussion of recurrent connections in a paper meant to model the brain would have been beneficial.""",3,1 neuroai19_4_2,"""This is an interesting paper and perhaps generates more questions than it answers, such as connection with other local learning rules. The question being asked is certainly interesting and relevant. The technicalities are straightforward and justified. It would, however, be interesting to see more analyses on properties during training, including convergence compared to benchmark without the additional spatial losses. In addition, how robust is this finding relative to network architecture? This paper is well written. The concepts, equations, and connections are emphasized and generally understood. Figures are also simple and clearly understandable. The authors use an MLP trained through backprop with penalty constraints to see if the learned network can be split to solve separate tasks. The biological connection is with constrained learning in the brain based on spatial constraints. The results are interesting, although its not immediately clear what the contribution are for neuroscience / ML. This is an interesting paper that attempts to analyze learned neural representations when neurons have additional spatial properties which determine their connection lengths. The network is then trained with a loss function that accounts for the strengths of the connections and penalizes large distances. This forces the resulting neurons within one layer to cluster together during a two-task classification.""",3,1 neuroai19_4_3,"""By introducing a cost on wiring strength and length to a fully-connected feedforward neural network, the authors show that this network trained on two tasks simultaneously (MNIST and Fashion-MNIST) splits into two modular and spatially segregated networks. This result is expected and quite preliminary but nonetheless interesting and relevant to the workshop. Rigorous Clearly presented. Please connect your work to this relevant work: pseudo-url pseudo-url pseudo-url pseudo-url pseudo-url (just came out) Minor: in the cost function, I believe ""alphaT(l)+ L(l)"" should be ""alphaT(l)+ V(l)"". This work could be relevant both to understand biological neural networks as well as build more efficient artificial neural networks. Suggestions for next steps: Is the modularity an effect of spatial constraint or weight strength constraint? Try both indpendently. What if the network has limited capacity (fewer neurons per layer)? Does it share more neurons between tasks in this case?""",3,1 neuroai19_5_1,"""Validating the choice of the parcellations that we use for obtaining and interpreting our results in neuroimaging data analysis is an important problem. There are many recent and ongoing efforts in this direction. The analyses presented here do not further clarify the matter, apart from ""it's complicated"", and more importantly do not answer the (perhaps over-ambitious) question which makes the title. The results appear to be robust. What is less clear is their motivation and interpretation. The paper is generally readable, even though a discontinuity is felt going from the very general and ambitious objectives, to the very specific analyses without a particular comment, to the again general conclusions. There is a typo at the end: ""functionnal"" The application is to neuroimaging data, and some of the indicators (decoding performance and classification) are related to AI research. Maybe this work could be better presented in a more extended study, explicitly focused on some technical aspects, clearly referencing previous studies and novel results.""",2,1 neuroai19_5_2,"""The goal of the study is important and interesting: to assign quantitative criteria to the quality of a given brain parcellation. Interesting and clear discussion of the issues involved. Went a little long and left little room for results. One major conclusion is that AAL does not perform as well as other parcellations methods but does not make quantitative comparison. Nice choice of quantitative criteria for parcellation and the carrying out of comparisons is convincing. No apparent statistical tests of differences between the performance of methods. Unclear what error bars are (standard error, 95% CI, quantiles) so it is unclear whether differences large by eye. In figure 1 c it is interesting to see for the same, I assume number regions, there is difference in performance. Really gets at the nice point that some parcellations may be averaging over unrelated regions. Some accounting for degrees of freedom through a held out data set would have been nice or if there was some clear mention of it would have been good. Introduction was clear and nicely discussed issues of parcellation philosophical and quantitative. Wasnt clear what x axis in Figure 1 referred to. I assumed: dimension=number brain regions? Wasnt clear what individual points were on Figure 1 subplots (individual subjects?). What error bars corresponded to was not mentioned (SE, 95% CI). Could not find explicit discussion of relation of brain parcellation to AI or vice-versa. The extent of relation as far as I can tell is that machine learning techniques are used in the parcellation. Some discussion to point out why the connection is deeper would have been good. Would have been nice if there was more focus on results and firm conclusions then introduction to the topic, though the introduction was nice. Interesection to AI was never made explicit or discussed.""",2,1 neuroai19_5_3,"""The authors raise a very important question how well do brain parcellations map onto functional units? How we choose to discretize data has incredibly important ramifications for the conclusions we draw, and the authors recognize this importance. However, while the authors clearly evidence the importance of their question, it was unclear how important their specifics results were. There was no clear take-away from the article, thus making it difficult to gauge the overall importance. The authors did not provide much detail on their results. I appreciated their outline of the ways in which they empirically investigated atlas utility, but I could not follow their results and thus could not adequately interpret the rigor. The figures were not described or interpreted, and I wasn't sure what the axes corresponded to. I am also not familiar with all of these functional atlases, and thus not sure how they differ beyond the extraction method. The introduction described the general premise quite well. However, this section took up a majority of the paper. The results section made sense in terms of grammar, but the lack of detail made it slightly difficult to follow, and made the article as a whole unclear to me in terms of take-home message and scientific impact. There is a typo on line 17: yet it is also strongly associated. While some ML techniques were used in the extraction and decoding, it seemed relatively light on the AI front. Overall, the general question that the authors investigate is very interesting and important. However, the specific scientific contribution of this paper is unclear to me. """,2,1 neuroai19_6_1,"""Finding neural networks that are robust to loss of neurons/connections is an important problem. The results seem to be: 1) Removing nodes damages the network 2) Retraining can recover damaged networks 3) A damage-retrain procedure during training is better than doing repair later However, this paper is not well-written, technically poor, and lacks the necessary tests/clarity to convince me especially about point 3. No attempt is made to connect to dropout, despite it being very similar. The mathematics in this study are poorly laid out and not convincing. As is, the results would not be reproducible. The notation is sloppy and not well-described. It isn't really clear from the description exactly how the damage-repair procedure differs from repair strategy 1, ""functional sub-network training"". Both perform gradient descent on the non-damaged nodes; presumably the difference is how many gradient descent repair steps are made between damage steps. I also take issue to the emphasis that damaging the network is a phase transition. I think anyone could expect that damaging nodes will make the performance decrease, whereas phase transitions refer to a variety of other phenomena (scaling, etc.) that the authors have not investigated. Basically, it makes a straightforward result sound cooler. In the rescue sections (3 & 4), the number of repair cycles needed to rescue a network is plotted a few times. However, what constitutes ""rescue"" is not defined. I would guess it means # of epochs until performance is within some threshold of the performance before damage, but this should be stated explicitly. For the damage-repair strategy, I'm not sure we're seeing exactly the same thing plotted on the ""Retrain cycles"" axis in Fig 3. Do these correspond to the same number of iterations? How do these numbers compare to the initial number of cycles used to train the networks? Test error is never evaluated but could; everything is done with training error. Finally, the whole argument about a damage-repair attractor does not convince me. The damage operation permanently removes a damaged neuron from the network. Therefore, the network gets smaller with each damage step. So the only attractor I can imagine is the eventual fixed point with no undamaged nodes and no network. This is a serious problem, unless I've misunderstood something crucial. * I would suggest applying your technique to a state-of-the-art deep network rather than one you've trained yourself. That would be more convincing to the AI audience. * Figure 1: The MLP with 1 layer actually does quite well until almost all of the nodes are removed... it is quite robust. * L. 48 ""critical threshold"" is never defined. These don't look critical, since the phase transition is quite smooth down to removing all the nodes. * L. 79, usually one would use a matrix W for this collection of weights; W is just one layer in the network but this isn't explained. How is N defined? * L. 80, what is phi? * L. 81, Dim is not defined, I think you mean number of nonzeros (i.e. the 0-norm) * L. 81, w R^N is unclear; I would guess w is a d by n matrix with N nonzeros, but this is the wrong notation for that * L. 82, missing an equals sign after D_i(w), the piecewise is also wrong, you mean w_j = 0 if j =1 and w_j = w_j otherwise * L. 85, You never discuss picking a step size, or whether you use full gradient descent or stochastic gradient descent. These details are important. * L. 85, again missing an equals sign before the piecewise definition, messed up similarly as before. * L. 85, unclear why {i, j} is here, it'd be better to introduce some set S(t) for the damaged nodes at time t * Ll. 89 & 93, the probabilities are wrong, since you are only picking from the remaining undamaged nodes (right?). Also, I don't think writing the composition operation really adds anything here. * Figure 4: t-SNE can make things look nicer than they really are; are your results stable to other kinds of embeddings like isomap, etc? I do like the overall structure of the paper: first introducing damage, then simple repair, then the more complex damage-repair procedure. Yet the overall quality of the writing is mediocre. It is argued that a these networks are used for ""cognitive tasks"", but the only task evaluated here is the classical AI task of image recognition; cognitive tasks would be much broader in my opinion. There is discussion of neuromorphic chips. I am not familiar with the details of these, but is node failure a common problem with them? It seems like more of an issue for biological networks, but I am not sure. The figures are rather small with small labels. The authors did not follow the formatting instructions since they changed the margin size. When I printed the paper for review the line numbers cut off. Specific suggestions: * The introduction has a good amount of awkward language and run-on sentences that could use editing. * L. 23, ""powerful paradigms"" rephrase * L. 36, what makes this ""physical"" damage? * L. 39 ""simple cognitive task"" is a stretch * L. 51, title your sections with a declarative statement rather than a question * L. 62, write out ""versus"" There is some discussion in the intro about how neurons in real networks may be damaged, but as far as I can tell, this is the only real connection to neuroscience. Everything else is just standard computational methods for artificial neural networks. I realize this is harsh, but I got the impression the authors just searched around for a few neuroscience citations to add in to a paper which is mostly a computational study. The statement that in ""most biological systems ... networks are constantly being perturbed and repaired"" is debatable. I would say that yes neurons do die in the brain throughout our lifetime, but many of them stay around for a very long time, too. It probably depends on which part of the nervous system you are talking about. I think the authors could have conducted a more thorough literature review and found a lot of work on biological networks that study node removal, since I am aware of at least a few papers along those lines applied to traumatic brain injury. However, if I were to be generous, I do think that biological systems probably apply principles like ""learning under constant damage"". But the mechanisms would be different and perhaps related to the stochasticity of neurons (wild speculation). There is little evidence that real neurons can do backpropagation, even though it's a perennial topic at NeurIPS. I think the question of damage is of course relevant to both AI and neuroscience. I just don't think the authors have made a convincing case that their algorithm is inspired by any biological principle or could be taking place in the brain.""",1,0 neuroai19_6_2,"""Although not explicitly discussed in the paper, it seems that an important hypothesis suggested by this work is that the constant rewiring observed in the brain might be a mechanism for building resilience to structural damage into neural circuits. From the perspective of AI, it is not clear that the kinds of damage considered here are actually a problem for hardware implementations of neural networks, limiting the importance or applicability of the proposed iterative damage-repair algorithm. The damage-repair algorithm proposed in this paper is principled and well demonstrated to achieve its goal (figure 3). However, because the algorithm is never clearly delineated, it is not clear how its data requirements compare to standard training schemes - are more epochs needed to train a network under the iterative damage-repair paradigm? Or can exactly the same amount of data be used as when training the network using standard SGD? If the former, then the interpretation of figure 3 changes drastically. The significance of all the other results in this paper are far from clear. Figure 2 seems little more than trivial and irrelevant to the subsequent results. The significance of figure 4 is not well established, and moreover not discussed in the context of extensive previous literature on the existence of local minima in the loss landscape of deep networks. Moreover, it is not obvious that an alternative scenario to figure 4 could be possible, given that these networks differ structurally along a presumably relatively small set of gradient descent steps. Additionally, the non-linear dimensionality reduction could hide complications in this picture. It seems like some kind of control experiment is missing here to clarify the meaning of these results. Lastly, the nearly one page of mathematics mainly consists of definitions, without adding any rigor to the results or arguments ultimately made. That said, the authors do explicitly state in the discussion that the presented formalism may provide a basis for future research. Color code is inconsistent across figures. A thorough description of the ""iterative damage-repair paradigm"" is never provided, leaving a lot for the reader to infer. Moreover, presentation of the performance of this algorithm (i.e. section 4) precedes the formal description of the algorithm, which reads awkwardly. The results presented in each section are never clearly related to each other. Two important properties of biological neural networks are (1) their resiliance to damage and (2) the constant turnover and change of synaptic connections. This submission addresses how to build such properties into artificial neural networks using gradient descent. The results thus constitute a method to incorporate a property of biological neural networks into artificial neural networks. Unfortunately, few advantages to this are explored beyond resilience to a kind of damage unlikely to occur to artificial neural networks. Alternatively, one could interpret the results as providing a machine learning-inspired solution to understanding the nature of these biological properties. Unfortunately, this interpretation is hardly discussed or explored. As written, it is far from clear what the contributions of this submission are meant to be. The presented results range from providing very shallow to potentially deep insight, at best connected through future research to be done. As such, the work feels very preliminary in nature.""",2,0 neuroai19_6_3,"""The authors show how iteratively damaging and repairing networks produces more damage-robust networks and connect this to invariant parameter sets and connected paths through the loss landscape. This is a useful direction for making AI systems robust to physical damage and to understanding the geometry of the training loss landscapes. The idea to characterize damage and repair as operators on the weight matrix is also novel, and could maybe be worth further investigation. While suggesting the potential for rigor, the authors' attempt to characterize damage and repair as operators on network parameters did not end up making the approach feel more principled. The authors would have been better off exchanging some of the more advanced mathematical ideas (which did not seem to add to their arguments) for more detail about e.g. the loss function and specific properties of the repair operator. Further, the authors compute network performance only in terms of training accuracy, raising doubts as to how well damage and repair affect performance on test data. One thing I also did not understand was how a network that receives continual damage by node deletion would remain robust to damage, since eventually there would be no more nodes remaining. The idea of connected paths through the loss landscape was also interesting but not properly detailed. As written, the use of damage and repair operators and the connection to topological spaces came across as a rather forceful attempt to use abstract mathematics. The result was unfortunately confusion rather than simplification. The use of unconvential notation (e.g. calling N the number of weights) also hindered understanding, and the network architecture was hard to understand, e.g. whether i referred to nodes or layers. Further, presenting long- and short-hand notation near line 93 seemed to serve no obvious purpose. Figure 4 was also rather hard to parse. Generally, I had to reread most paragraphs to understand what was being communicated. The authors motivate resilience to damage based on biological considerations, but that is the only neuroscience connection. The idea of making AI systems more robust by continual damage and repair is certainly interesting, and the authors argue that it is feasible and effective. I also appreciate the attempt to connect this work to geometric features of the loss landscape. As written, however, the manuscript was rather hard to follow and did not seem very convincing. It also would have been nice to connect this idea to the much more well studied notion of dropout, as well as to discuss the effect of the damage-repair scheme to the network's response to adversarial inputs.""",3,0 neuroai19_7_1,"""This paper convincingly shows how graph convolutional networks can provide very robust and interpretable predictions of cognitive states from brain activity. This is simply a solid paper which includes various control analyses. Clear and well-written Nice application of GCNs for cognitive state decoding. Thorough theoretically rigorous application of a new approach for rapid decoding of cognitive states.""",4,1 neuroai19_7_2,"""Neural decoding is an important problem with applications throughout neuroscience. The methods appear to be correct and are well explained. The paper is well written but is well over the limit for this workshop. Its clear that the authors didn't try to shorten their paper for this workshop. Figure 2 says ""Sates"" instead of ""States"" Interesting use of ML methods to analyze neural datasets. The contribution is relevant to the workshop and is well written.""",3,1 neuroai19_8_1,"""-- Development of new methods to distill DCNN computational strategies is crucial. The present method is interesting and appears to generate testable novel hypotheses. -- Appears rigorous. I leave it to reviewers with more expertise in pre-existing model reduction techniques to evaluate the novelty. -- Text is written articulately, figures are exceptionally rich. -- Although the CNN used appears to be that from Maheswaranathan et al. (2018), this would be a more self-contained submission if it included some of the details about that model and its training data. For example, there is nothing in the present submission which even specifies which species' retina the CNN is a model of (aside from a non-sequitur mention of salamanders in the text, and an odd-looking creature in Figure 3). Nor is there any detail about the image stimuli to which neural responses were recorded, even though the particular spatiotemporal structure of the stimuli seems to be critical to the model reduction method. -- Development of model summarisation and interpretation methods is of critical importance to making AI-based systems useful models in neuroscience. -- A great deal of the text, in both the abstract and manuscript, is given over to bombastic excitement about the power and virtues of the method. I understand the desire to communicate the potential of a new method, but this would be a more informative, self-contained piece of work if more space and attention were dedicated to describing the methods involved in the proof-of-principle study reported. """,4,1 neuroai19_8_2,"""This work advances previous work that introduced a three layer convolutional neural network (CNN) model that both reproduces retinal ganglion cell responses with high accuracy and whose hidden neurons are well correlated with retinal interneurons, by elucidating the computations that are happening between input and output neurons in both the CNN and retina recordings. Issues with clarity make it hard to deduce how successful they are in achieving their stated goals. The authors introduce a decomposition of the output firing rate into a dynamically weighted sum of the pre-nonlinearity activations of hidden units. This derivation is hampered by an important symbol not being defined (the caligraphic F). Perhaps in equation (1) the caligraphic F was actually supposed to be a caligraphic A. It's not stated if this decomposition is unique. The derivation seems to follow from a use of the Fundamental Theorem of Calculus followed by the chain rule applied to the caligraphic F, but they seem to use that this F evaluated at zero input stimulus is zero, and it isn't clear why this would be the case with a nonzero bias. The dynamically weighted sum is truncated based on the magnitude of the terms. Since these terms depend on time t, it isn't clear how this magnitude is taken. While the introduction to Section 2 describes the method as ""we first carve out important sub-circuits using modern attribution methods, and then reduce dimensionality by exploiting spatial invariances..."", in their actual approach it seems like they exploit spatial invariances to reduce dimensionality before carving out the sub-circuits. One important comparison to have made I think is with a model that builds filters based off of the input stimuli themselves rather than the hidden unit preactivations. How useful is doing the latter compared to the former? The clarity of the work suffers both from missing technical details (see Rigor Comment) as well as inadequate interpretation of the results. Since I've already addressed the former, here I'll focus on the latter. First, it isn't clear to me how novel equation (1) is. This equation, as far as I can tell, charts a way to making an optimal filter that takes in hidden unit activations before the nonlinearity is applied and outputs a response r(t) that is close to (or exactly equal to) the true response. Since the topic of building optimal filters is a rich and well-explored one, it is essential to contextualize equation (1) within this body of work. Second, it is difficult to interpret how to think of the ""effective weights"" they derive, and the resulting effective filters. In particular, the curves that are plotted in Figure 2, for instance in A-1, aren't very well explained -- are these the preactivations of the hidden units, or the filters applied to these preactivations? Why is it that the curves found in the reduced ""three hidden unit"" model in Figure A-1 aren't found among the filters in the ""eight hidden unit"" model of figure A-1? Third, it isn't clear to me how the reduced ""filter model"" they derive is mapped back into a CNN framework. The paper claims to provide a method to ""algorithmically extract computational mechanisms, and consequently conceptual insights, from deep CNN models"" by extracting ""a simple, reduced, minimal subnetwork that is most important in generating a complex CNNs response to any given stimulus."" However, the ""subnetwork"" does not appear to me to actually be a subnetwork of the CNN, but rather a different, simpler network model that only overlaps with the CNN at the first layer. Can we really be sure that the CNN is implementing the filters as derived in the reduced model? Validation on a held-out test set may be needed to test this hypothesis. The paper seeks to shed light both on what is going on inside both artificial as well as biological neural networks. Their finding of a new neural circuit mechanism that implements omitted stimulus response more robustly certainly sheds light on the latter, but I don't feel like they made a very clear case when it comes to the former. This work seeks to take a significant step in a very interesting and important direction, but issues with clarity make it hard to deduce how successful they are in making this step. The discovery of a new circuit mechanism that implements omitted stimulus response more robustly feels like a very significant contribution, although my lack of familiarity with this body of work makes it hard for me to judge with confidence.""",3,1 neuroai19_8_3,"""The authors state three high-level improvements they want to make to CNN-based models of neural systems: 1 & 2) Capturing computational mechanisms and extracting conceptual insights. Operationally, I'm not quite sure how these are different, so, to me this goal is roughly ""be explainable"", and progress towards it could be measured e.g. in MDLs. 3) Suggest testable hypotheses. I agree these are good goals, and I think some progress is made, but that progress seems somewhat limited in scope. The technical aspects of the paper seem correct, though I have some higher-level conceptual concerns. 1) If I understand correctly, attribution is computed only for a single OSR stimulus video. Is the attribution analysis stable for different stimulus frequencies? If not, is it really an explanation of the OSR? 2) I agree with a concern raised by reviewer 3: It's difficult to see a 1-layer network as a ""mechanistic explanation"" of a 3-layer network. The flow/high-level organization of the paper works well. Explanations are mostly complete, though some details are missing. e.g. what was the nonlinearity used in the model CNN? Also, do the CNN layers correspond to cell populations, and if so, why is it reasonable to collapse the time dimension after the first layer? I believe this paper is addressing questions that many of the workshop attendees will find interesting.""",3,1 neuroai19_9_1,"""This paper shows how lateral connections can make neural networks more robust to noise (in the data) in the setting of classification. Although the results are not necessarily groundbreaking, they provide a useful demonstration of where these lateral weights could assist in modern deep learning architectures. Overall, the authors are fairly rigorous in their evaluation, performing a reasonable set of experiments that compare different schemes (lateral vs. non-lateral), noise levels, noisy types, and datasets. A couple things stick out somewhat: 1) from Table 1, it appears as though the authors use a deeper architecture for their model, which may confound their results, and 2) the authors claim that they observed sparsification of feature activations with their method, however this is not backed up empirically. I would be helpful to include these results in order to make this claim. The presentation is, overall, quite clear. Further details in the experimental set-up and weight derivation would be helpful in the final version. The paper primarily uses lateral weights as a mechanism for improving robustness in neural network models. There are some comparisons with neuroscience data in Figure 1. While these findings may be helpful to a neuroscience audience, the paper seems primarily geared toward the machine learning community. While minor aspects of the paper could be improved (see other comments), the paper seems like a useful contribution to the workshop. One additional thing that would help in strengthening the submission would be a discussion around other normalization schemes currently used in deep learning, e.g. batch normalization, layer normalization, local response normalization. Comparing and contrasting these ideas, even at a conceptual level, would be useful.""",4,1 neuroai19_9_2,"""Interesting Neuro->AI proposal to add lateral connections like in cortex, and quantify potential functional role Preliminary but interesting results on MNIST Less convincing for CIFAR-10 Statistical significance of improvements due to proposed method for CIFAR-10 needs further scrutiny. Comparison of the proposed lateral hebbian rule to previous work, e.g. neocognitron? Lacks discussion A good example of Neuro->AI and back. Ideas from Neuro, functional interpretation from AI application.""",3,1 neuroai19_9_3,"""The idea of adding some kind of lateral connectivity to a deep learning model has been explored previously (e.g. in AlexNet), but it's always interesting to see work along these lines. However, the rationale behind the existing approach doesn't seem to make much sense, and the evaluation is inadequate to show whether it works. I do not think there's much for future work to build upon in this submission. The proposed ""rule"" does not make much sense. First, it's not actually a learning rule, since it's not an iterative update, and it's not clear how it's implemented in practice (and whether the authors backpropagate through the computations involved). Second, the derivation seems to rely on a strange idea about how neural networks work. It refers to ""the probability of the coded feature,"" but this doesn't make much sense since the real-valued features are passed directly to the next layer and not sampled from. There also seem to be a massive number of assumptions involved, which are not justified and do not make any intuitive sense, e.g. ""each patch provides independent information"" and p(F_k^2)$. MNIST is a toy task and methods that improve robustness on MNIST do not necessarily transfer to other datasets, but the evaluation here is mostly reasonable. The original network is only slightly worse than a LeNet-5 baseline. In this setting, CNNEx seems to marginally outperform the network without weight decay and dropout at high distortion levels, but the proposed method is less effective than weight decay. When the methods are combined, it looks like the parameter-matched CNN outperforms CNNEx at most distortion levels, but the proposed method still yields gains at larger distortion levels. However, it is not clear to me why the parameter-matched CNN baseline consists of adding more filters to the baseline CNN, rather than training the ""extraclassical receptive field"" layers by backpropagation. On CIFAR-10, the baseline achieves around 60%. This result is much worse than the 84.4% result reported for a three convolutional layer network in the dropout paper. The network likely performs poorly because it has very few filters and was trained for only 10 epochs. It's hard to know whether gains in this rather unrealistic network would translate to a network with more filters, much less a modern image classification network. But also, there aren't really gains in this setting. The accuracy in noise without weight decay is <1% vs. the parameter-matched baseline, and there is no advantage to the proposed method over a CNN with weight decay. The paper is generally readable, but there is one major missing detail: It is not clear to me how the ""learning rule"" in Eq. 2 is actually implemented. It doesn't seem like it's actually an iterative learning rule, since there is an expectation over images. At least in a typical ML training setup, it's intractable to perform a forward pass for all training images at each set. Assuming the training setup is typical, do the authors backpropagate through the covariance computation on a minibatch and use full-batch moments at test time, or do they use some kind of exponential moving averaging over training? This paper involves a neuroscience-inspired idea applied to a neural network. However, the implementation of the general idea has little relationship to neuroscience, and it's difficult to link the results to any insights that would be useful to neuroscience. Thus, the relationship seems surface-level. This submission proposes introducing ""extra-classical receptive field"" layers into neural networks. These layers apply weights that integrate over a slightly wider area than the preceding convolutional layer. Although the idea of extraclassical receptive fields is potentially interesting, the ""extraclassical receptive field"" here is just a convolutional layer that is trained in an ""unsupervised"" way that is neither well-described nor well-justified. There are significant problems with the evaluation that make it difficult to draw meaningful conclusions about the performance of the method, but it does not appear to meaningfully improve performance or robustness on CIFAR-10. Strengths: The proposed idea seems to modestly improve robustness of a small network trained on MNIST to large magnitude noise perturbations, versus a baseline that isn't totally fair. Weaknesses: The theoretical justification for the proposed technique equates the activations in hidden layers of neural network to probabilities that a feature is present, which is nonsensical. The hidden layers of a standard neural network do not implement probabilistic inference, and only the output of the network can be directly treated as a probability distribution. It is unclear how the weights of the proposed model are implemented in practice. The parameter-matched baseline is chosen to be architecturally different from the CNNEx model, rather than the same model with weights trained by backpropagation. The CIFAR-10 experimental setup is unconvincing. The baseline is worse than the best result reported in the 2009 paper that introduced the dataset, and it is unclear whether the results would generalize to networks trained for more epochs with more layers and more filters. The proposed method does not appear to achieve meaningfully better performance than the baseline on CIFAR-10, and actually performs significantly worse than the baseline when weight decay is applied.""",1,1 neuroai19_10_1,"""It is interesting to come up with an automated animal training platform to understand animals individually. They use generalized linear model to learn the decision-making online and also analyze how pre-trial movement trajectories could affect the decision-making. The idea is potentially useful but the current model and tested task are basic. In line 51, the authors claim they test the mice to learn a variety of decision tasks but only report a simple binary visual task and doesn't mention how to generalize to more complicated task. There was little intuition and without any explanation to show why GLM is preferred to other methods. Also, since there is no comparison to other methods at all, it is not clear that 80% accuracy is high or low for a binary visual task. In figure 2, the dashed boxes on the left overlap with the number. In figure 3(d), the factors A,B,C,D,E are a little bit confusing and should be consistent with previous explanation. In general the figures are understandable. While the authors use machine learning techniques to model and analyze decision-making policy of individual animals, the inference model is based on the animals choice and movement trajectory instead of the activity of real neurons. The hardware setup is good and the direction of understanding individual animals is interesting. However, more rigorous tests and analyses are needed. It would be also good to see how these understandings can improve the animal training.""",2,0 neuroai19_10_2,"""In this paper, the authors developed an automated training process for mice, and analysed the decision-making behaviour by training GLM on decision related variables, and a classifier on outputs from Deeplabcut. Automated training of rodents is now practiced in more than several laboratories (eg. Winter & Schaefer, 2011; Poddar et al., 2013), and the behavioural analysis done here is pretty standard too, so the novelty of this work is rather limited. It is probably just a clarity issue, but it is unclear whether the 80% performance they observed in section 4 is really significant from the manuscript alone. When 80% of the trials are the correct trials, even without training any classifier, you can trivially predict the performance with 80% chance. The authors should mention how the bias was controlled. Fig. 4 c-d are not clear. What does the distance (y-axis of Fig. 4c) mean? And how can we tell that the cyan cluster correspond to hesitation, not inattention? Behavioural analysis was done with popular methods for which no contribution has done from this particular work. Because of that, I think the level of intersection is quite low. Although the preliminary results presented here is not particularly novel nor significant judging from the manuscript, the experimental protocol they set up is quite neat, and I hope the authors will address interesting questions using this system.""",2,0 neuroai19_10_3,"""This paper analyzes the ability to predict of a mouse's choice in a self-initiated 2AFC. Two different types of models are studied - a GLM with various terms including current stimulus, as well as a model with bodyparts tracked using DLC. While the models used are interesting, and the potential behavioral result is interesting if true, there should be a lot more analysis on the results. At this stage, the findings are currently uninterpretable and unclear. Description of methods is lacking. The methods section for the behavioral analysis (Section 4) really needs to have more details in it for us to judge the importance of the result. A GLM is more commonly used for predicting future choice. If indeed the movements are offering much more insight than the GLM, especially the GLM without the stimulus information (easier task), then it's an interesting result. There is no comparison of the behavioral model to anything else currently, including chance levels. How much more can the behavioral model capture than the GLM model, if any? Was this run on all trials or just a subset of the trials where the mouse was performing at or above 80% anyway? Why was the behavioral model performing as well as it was? If the authors used all the time right up to the stimulus and the mouse didn't move between the stimulus and the choice, the mouse would already be at the chosen port and it would be easy to tell the choice. The methods and result sections are very unclear. For the purposes of this conference, there is a focus on the two types of models to predict the choice in the results section, but the focus of the other sections was to build the automating of tasks and animal identification - it is unclear how these two tie together. How exactly would the prediction of the animal's choice using either the GLM or the behavior recording help in adaptive training? Moreover, the models have obviously been built post-hoc. That being said, there is potential in this analysis- if the authors concentrate on the prediction of the mouse's choice using (a) stimulus and (b) movement parameters, and make statements on how the behavior gives us further information about the mouse's future choice than just the stimulus and previous choices, then this paper could have a nice message. Tightening up the message of the paper and focusing on the results of the modeling effort, and explaining these results a lot better, as opposed to spending valuable space detailing the automating of the rigs, would go a long way. 1) What is A,B,C,D,E in Figure 3d? - any interpretability in this figure would be great. 2) What do the different clusters correspond to in Figure 4, apart from 'hesitation'? 3) What is the 'distance' (cm) in Figure 4c? 4) How does the accuracy in Figure 4 compare to the accuracy values in Figure 3? 5) A lot more information about Figures 3 and 4 is needed. A variety of machine learning tools are used in this study, from DeepLabCut to t-SNE to a boosting classifier. Not sure if that is enough for 'intersection'. Focusing on the methods and results of the behavioral analysis as compared to the more common GLM analysis would strengthen the paper - it is an interesting premise that deserves to be better explored.""",3,0 neuroai19_11_1,"""Decoding speech (or intended speech) from neural signals is an important problem with wide potential clinical applications. This study moves towards a more naturalistic setting by decoding a larger set of words than what was previously attemped. This is an important step towards a useful device. The main issue I have with the paper lies in the description of the experiment. It is not clear what constitutes a ""trial"". What are the lengths of the time bins after computing the spectral power? This is presumably coarser than 30 kHz. What data are used for training versus testing, and are these random bins or non-interleaved segments of the total timeseries? From the description of the loss as cross-entropy, it sounds like the output target for the LSTM is phoneme identity. What is the total number of phonemes? It is unclear whether the language model built on top of this is actually trained on the subject data or the phoneme to word map is just a result of the Brown corpus + CMU dictionary. This needs to be made clear. Similarly, it sounds like the goal of the study is to decode words from a very large corpus. However, the experimental design section states that the subjects only performed trials where they said ""yes"", ""no"", or phoneme strings which presumably don't map to real words. So what is the part where you try and decode their actual speech? Is it from observations of the rest of their hospital stay while they are speaking with family, friends, doctors, and nurses? Is this what's tested after only training on the phoneme string data? This should all be made explicit. I found the level of detail in the LSTM section more than necessary, and the explicit parameters of the ADAM method and learning rate could be left out. (As is, the level of detail in this paper is not enough to make the study reproducible, but I am glad to see the code will be made available. With the code, you don't need these parameters in the text.) Instead, I would use the space gained to explain more the basic experimental setup and algorithm design. On the other hand, there could be more detail about the smoothing + particle filter steps. Is the automaton model a Markov chain? Figure 1 doesn't add much in my opinion. You could add a lot more information by creating a diagram of how phoneme + automaton gives the word output. Adding some math like the PF update equations could make everything more precise. The paper is generally well-written with correct grammar and good explanations. My main issues are with the technical clarity (see technical comments). Small edits: * L. 1, I'd add ""may"" before help * L. 2, could strike ""output"" from ""speech output"" * L. 33, ""the speech cortex"" -> ""speech cortex"" * Ll. 39-42, this last sentence is a run-on. Also, it is missing a comma before ""i.e."" * L. 48, use backticks `` for open quotes in Latex * L. 60, missing a comma ""Across multiple subject, vowels..."" * L. 63, add ""The"" to start of 1st sentence * L. 64, ""Futher"" -> ""Furthermore"" * Figure 1, typo ""nodes"" This is an excellent example of an AI method applied to an important neuroscience problem. This is a nice paper as is, and will be better if the authors can address my issues with the technical clarity. With some more effort given to formatting (like removing double-spacing of bibliography), there should be space for these improvements. I would like to hear what the authors think about decoding speech, where the subject actually says a word, versus decoding intended speech, where the subject only imagines saying a word. Intended speech sounds more difficult to me, since there won't be any motor signals. This is an important challenge to overcome for someone with motor impairment. Is there any way to interpret the signals that this framework learns? Can you tell which electrodes are most relevant for the decoding task, and are these in speech or motor areas? This is probably something that could go into the discussion.""",3,1 neuroai19_11_2,"""This is a very important question. If the accuracy is high, an algorithm can increase the life quality of disabled or stroke patients. The authors used a set a standard RNN (LSTM) network to use neural data (LFP) for word detection. It is a bit unclear how they select their features. It is also unclear how their algorithm perform (37% accuracy) compared to other algorithms. It is thus hard to judge the significance of result. The text is very brief in terms of experimental detail and methods. It is challenging to figure out what exactly the algorithm did and the advantage of this particular algorithm based on the results written. The authors tried to use RNN to encode LFP signal for speech recognition.""",2,1 neuroai19_11_3,"""The goal behind the paper is very useful: being able to decode complex speech and not just choose an option from a limited pool. A solution which allows patients to communicate easily and rapidly would greatly improve their quality of life. Using a language model or any kind of recurrent model to use past brain activity to predict more accurately is a crucial direction and the authors are correct in pursuing it. The experiments appear convincing. The limited space doesn't allow a very deep understanding of all the procedures that were used. The paper is well written. This work is an example of using AI tools to decode brain activity and isn't about using AI as a model of what the brain is doing. It is a nice illustration of the role that AI could achieve in that domain. This is an interesting and promising approach. The problem that is addressed is important and the methods seem sound. However, the authors limit their analysis to very simple stimuli (yes and no and some non-word sounds) which reduces the impact of their work (they motivate their approach as not being constrained to decoding out of a small pool.""",4,1 neuroai19_12_1,"""-- Interesting and timely to explore the architectural design space between models constrained to match biological vision, and those constrained to perform static-image object-classification well. No major insights yet from these particular results, but it is helpful to see that the performance gulf is likely due to many small features, rather than any on single architectural difference. -- Architectural and training details described in reasonable detail given the available space. -- The second set of experiments, testing how well the various architectures match primate brain data, seem rather cursory. Given that the motivation given for choosing Densenet as a target architecture was its relatively high BrainScore, it is odd that BrainScore (or any of its subcomponent scores) is not calculated for any of the intermediate architectures. The only things presented towards this are qualitative histograms of size-tuning and sparsity in the two endpoint architectures (VSN and Densenet). Representational dissimilarity matrices are also shown, for these two architectures only, with no quantification of how well either of these predicts macaque or human matrices. -- Generally clearly written. -- Figures could be more clearly labelled. E.g. a title in Figure 2 indicating that these results concern sparsity. In Figure 3A, an indication of what the red bars distinguish (I still don't fully understand even from the caption - ""red means >= 4""....but then this same convention is not applied to the DenseNet histogram?). In Figure 3B, does the red vs blue colour coding indicate anything, or just distinguish single-neuron from population plots? If the latter, these would be better all blue, as the current colour scheme suggests some correspondence with the red vs blue bars of the size-tuning plots in 3A. -- Representational Similarity Analysis in Figure 3C is very unclear. Figure caption describes this as a representational *similarity* matrix, but then says that macaque data show ""low values"" for inanimate vs inanimate pairwise entries, implying that the matrix actually shows *dissimilarity* values (but which metric?). Caption should be clarified and figure should include a colour map legend indicating what distance measure is used. -- Strong combination of neuroscience and machine learning. Constructs a continuum of models stretching from maximally-biologically-informed, to engineer-optimised, to try to resolve a discrepancy in performance between the two approaches to network architecture choice. -- A nice idea, to systematically explore the space between biology-optimised and computer-vision-optimised DCNN architectures. -- There seem to be some simple ways to improve the set of experiments comparing the networks to macaque/human data. For example, by calculating the representational dissimilarity matrix correlations for each of the intermediate architectures to human and macaque data from Kriegeskorte et al. (2008)'s Neuron paper. -- The conclusion that the ventral stream may not be ""optimised specifically for core object recognition"" seems like a bit of a leap. At most, the poor recognition performance suggests the ventral stream may not be optimised specifically for Imagenet- or CIFAR-like tasks in which one must name the main object in decontextualised static images. Put this way, it seems almost certain that mammalian ventral stream is *not* optimised specifically for this task. A more naturalistic definition of ""core object recognition"" might be recognising the identity and properties and affordances of objects in dynamic visual input, which the ventral stream likely is optimised for.""",3,1 neuroai19_12_2,"""After the field's initial success of broadly modeling the ventral stream, this study analyzes which tweaks make a data-driven ventral-stream-like network built to resemble architectural details in cortex better at performing on CIFAR-10. There are still major differences between ML models now used to predict brain activity and the actual implementation in cortex and this paper takes first steps to bridge that gap. The paper starts from a previously published ventral-stream-like network and cumulatively changes its architecture to resemble DenseNet, which is a high-performance ML model with high Brain-Score (i.e. it predicts neural populations + behavior). The cumulative changes are convincing and more and more closely approach DenseNet CIFAR-10 performance. I would have liked to see all of the changes by themselves instead of only in aggregation to better identify which changes are important or whether it's really the interplay of all those changes that improve performance. Overall though, this analysis is well-done. Figure 2 tests the effect of sparsity on accuracy and finds that more sparsity harms performance. This is an interesting finding, but I would have liked more controls on the network size: i.e. if you increase the network size, can you still train to remedy the accuracy losses from increased sparsity. The comparison to macaque neural data in Figure 3 offers a nice fine-grain view at differences of classical measures used in neuroscience (size-tuning bandwidth, single-unit selectivity, population sparseness, and RDMs). However, the results of this analysis are difficult to put into context since model-match-to-brain is not explicitly quantified, but rather only relies on visual comparison. Additionally, the macaque data is only shown for size-tuning, but not for the other three properties. It is thus hard to say which of the models matches the brain more closely. Overall, it is clear what steps the authors take. The following points were confusing to me: 1. The first paragraph of the results describes the different changes to the architecture with a lot of text. This would be easier to digest with at least a list (one item for H1, H2 etc.), or ideally a table that clearly marks what the differences are, starting from the ventral-stream-like network, going over H1 H2 etc., to the DenseNet architecture. 2. line 46 states that there is a readout-head for ImageNet, but the paper only shows CIFAR-10 results The paper compares a neuroscience model (data-driven ventral-stream-like network with cortex-like architectural features) with a Machine Learning model (DenseNet; which has also been shown to predict neural activity and human behavior) and qualitatively evaluates both on macaque data. This general approach is a good direction to combine bottom-up cortex-like architectural features with top-down large-scale neural networks from Machine Learning. One interesting finding is that physiologically realistic connection sparsity seems to stand in contrast with high task performance. I wonder whether this means we simply need to build bigger networks to accommodate the sparsity or where the mismatch is coming from. Bridging bottom-up models from neuroscience with top-down models from Machine Learning is a promising direction. This paper shows some interesting results on which architectural changes need to be made to the bottom-up model, and finds that increased sparsity harms performance. I would have liked to see * H1-6 individually to isolate what improves performance by itself/only in combination * the sparsity analysis in Fig. 2 controlled for network size * the comparison to neural data quantified, as it is otherwise very hard to make any judgments about which model is better than the other""",3,1 neuroai19_12_3,"""The authors investigate various architecture manipulations of a ""ventral-stream-like network"" and its effect on object recognition performance. Despite the more ""biological"" architecture, the VSN was not more similar to IT responses. A comparison to V4 would have been useful and increased the importance. Rigorous study. Performance is greatly improved by adding dense connections. Is this a consequence of the increase in the number of free parameters? I think more explanation of the VSN would have been helpful. Explores what sorts of manipulations hurt or help object recognition performance.""",4,1 neuroai19_13_1,"""This could lead to important work that allows for a more objective approach to understanding cell types, but for the moment it's just an early proof of principle without any strong results. Approach seems reasonable, but I note that the method only seems to work well on simulated data if cell classes are quite well separated, which is probably not the case in real brains. Quite dense but seems to contain all the relevant information. This seems more like a classic neuroscience / statistical approach (E-M algorithm to fit a filtering model to data), but certainly not unrelated to ML. I think this work is too preliminary for the moment. Either it needs some results, or it needs to make a more thorough and convincing case as a proof of principle.""",2,1 neuroai19_13_2,"""Taking the idea that there are multiple, discrete cell types into the fitting of GLMs of these models is an interesting and important idea. Frequently, the investigation of discrete cell types is taken into account after fitting models (typically to see if there is any evidence of discretization), and I think there could be something to gain from using this feature as a prior. However, I was not fully convinced that the results presented in this paper made substantial gains on this question. The work presented - the GMM-GLMs and the methods used to fit these models - seemed quite rigorous. I appreciate that the authors included details in their algorithm and fitting procedure, but this section was a bit dense to read (which might be necessary but I also wish that some of this text was exchanged for a focus on the specific advancement of this work in understanding neural data). The testing of the model-fitting on simulated data is very key here. The introduction/motivation section was relatively clearly-written. I found section 2.2 to be a bit dense and difficult to read (and I believe that not all of the symbols used were defined). The figures were relatively well-described and easy to interpret. This paper focuses mostly on techniques to fit GMM-GLMs, and feels mostly like an intersection between ML and neuroscience. I think that intersection is very interesting and fruitful, even though it might not be ""AI"" and neuroscience specifically. A substantial focus of this paper was the fitting of the GMM-GLM, which is somewhat interesting in its own right, but the specific scientific advancement of this technique isn't wholly convincing to me, and therefore I am a little unconvinced that this technique is worth the effort. In particular, I am not convinced that folding in the cell-clustering aspect to the model fitting doesn't do more than regularize the self-interaction filters. Further (and relatedly), I am also not convinced that there is clustering in the self-interaction filters of the neural data (Figure 2B looks like lines were drawn in the sand, although this could be a result from the PCA projection). Overall, it seems that almost the same amount of scientific inference can occur if individual GLMS are fit, and then the interaction filters are clustered afterwards. What do we gain by doing it the way presented (which feels much more complicated)? In addition, this may be a point of semantics, but functionally-defined cell types are often delineated by the functional properties of the neurons - eg their responses to stimuli. (Or at least often enough that I think the language used in the paper should be clarified.) The self-interaction term based method of classifying feels more like a proxy to classifying cells based on their electrophysiological properties (e.g. fast-spiking interneurons versus excitatory neurons). I think this way of classifying is fine and interesting, but the language used by the authors, and how they relate it to other literature, feels a bit confusing to me. """,2,1 neuroai19_13_3,"""This kind of model is interesting to classify cells based on their dynamical properties - instead of directly clustering their activity. Overall the method seems sound and this is an interesting application, but I wish there was some more ground truth analysis on real data. I couldn't find much information about the dataset - what is the rough number of different cell types to be expected? What are other properties of the data? It would be good to compare the results with a simpler method. The method seems detailed and convincing - iterate over GLM and GMM. Good description of the generative model as well. One important comment - it seems like the dataset provided in the link does have 'morphology' and other information about the cell being recorded from. It seems like the authors would definitely want to use this information for a sign that the clusters that they found indeed have something to do with the biology. Well motivated. Model was well described. The validation needs work, as the clusters have no meaning yet- but this can be performed using the same dataset - I would urge the authors to do so. Figure 1c was unclear. Figure 1b also is a little unclear - are the self interaction filter cluster centers very close to each other? Please zoom in for the last part of the graph to make out differences in the centers. Develop GLM to include GMM. Use in a model of a neuron. Although validation likelihoods are shown, there should be more validation that the clustering correlates with some other properties of the neurons, ex. some aspect of the morphology.""",4,1 neuroai19_14_1,"""While spike sorting is probably not the major bottleneck to our effective use & interpretation of large multi-electrode array neural recordings, there is definitely room for improvement on current methods - in performance, versatility & comp. efficiency. Hence the work can be considered marginally important. Description of the algorithm is rigorous enough for this setting, as is comparison to other models. Perhaps would be nice to see comparisons to YASS (if possible) since this seems to outperform Kilosort? Similarly since the model is by definition trained on synthetic data, the richness & 'accuracy' of this generative model is key. Hence my label 'preliminary'. The text is well written & clear, making the shortfalls of previous methods explicit & being clear about which of these the proposed method aims to address. Perhaps the data pre-processing section could be a little clearer. A little ambiguous is ""We partitioned the recording data such that the data for each channel only contains spikes centered at that channel"" but then to also say ""For each spike, the extracted waveform is... from the center channel and its 6 immediate neighbor channels."" This could perhaps be better phrased? This is clearly a description of trying to optimise machine learning approaches for a well known, 'hard' problem in neuroscience. Hardly see any 'AI' relation. Paper is well written and fairly easy to read. The efficiency of amortised methods seems clear for increasingly large data sets, but perhaps a point of concern is how confident we can be about doing supervised learning for an inherently 'unsupervised' problem?! Training on synthetically labelled data to do inference on true data? """,3,1 neuroai19_14_2,"""Efficient and accurate spike sorting algorithm for multi-channel extracellular recordings is necessary, especially in the low SNR (small spikes) domain. Used a previously developed neural network structure to handle neural data. The network structure and logic is very clear. However, it is a bit unclear how well this algorithm perform to reduce the uncertainty of small amplitude spikes compared to other algorithms. It is also a bit unclear whether there is over-splitting of clusters in this algorithm. The text is well written. Used AI technique to solve spike sorting problem. """,4,1 neuroai19_15_1,"""Premise is that feedback alignment networks are also more robust to adversarial attacks. The authors show because the ""gradient"" in the feedback pathway is a rough approximation, it is hard to use this gradient to train an adversarial attack. The basic premise is very strange. Adversarial attacks are artificial: attacker has access to gradient of the loss function. For FA networks, it's unclear why an attacker could not access true gradient, and be forced to use the approximate gradient. Overall the technical aspects of this paper seem sound. No trouble understanding the material or writing By focusing on the more biologically plausible ""feedback alignment"" networks, the paper does sit at the intersection of neuro and AI. However at present, adversarial attacks likely have much larger relevance to AI than neuro. The premise of the work must be clarified. As well as whether or how adversarial attacks (as framed) might have relevance to neuroscience.""",3,1 neuroai19_15_2,"""This work might open up a new class of neural network model learning framework that could go beyond simply solving adversarial attacks. It is hard to judge the rigor with such little information. Overall, it seems pretty well managed. The document has been well written. The work is inspired by a critical difference in feedback connection as applicable to the brain and the models. The author are putting forward a very interesting proposition and it is worth discussing further. """,4,1 neuroai19_15_3,"""New strategies for learning that use more biologically-plausible learning rules are of extreme importance for the field. Results appear to be sound. Well written. neural-inspired learning Overall comments: Well written paper that explores an interesting idea. The material presented is novel and relevant to the workshop. Experiments conducted do a good job of supporting the authors' claims. Several small typos: Line 7 but is still instead of but still and small perturbations of magnitude instead of small perturbation magnitude Line 34 Interchange the order of fa and bp Line 52 Kurakin et al Line 53 Replace change with changes Replace BMI with BIM wherever appropriate.""",4,1 neuroai19_16_1,"""This paper aims to differentiate Granger causality and stimulus related information flow. The problem is ill-posed, making it either impossible to solve or non-existing. The latter case would correspond to a difference in definition (reduction in surprise vs reduction in variance). The former case would correspond to saying that Granger causality does not measure ""true"" information or ""true"" causality. And this is a tautology. Regardless of this, the implementation is wrong, see next section. For gaussian variables Granger causality and Transfer entropy are equivalent, see Barnett, L., Barrett, A. B., & Seth, A. K. (2009). Granger Causality and Transfer Entropy Are Equivalent for Gaussian Variables. Physical Review Letters, 103(23). doi:10.1103/physrevlett.103.238701, so the result of applying both algorithms to the case study presented here would be the same. The results don't look the same because the authors compare two different things, in figure 1b is GC vs noise, in figure 1c is the conditioned mutual information vs time step. Also, both results are correct. By construction there is a bidirectional influence (the feedback). The words can be read, but the paper is difficult to understand until one realizes that it's about solving a non existing problem, and doing it wrong on top of it. There is no link with neuro nor with AI See above. The margins for improvement are very limited in this case.""",1,0 neuroai19_16_2,"""Granger causality has been used in the fMRI literature for many years, and a deeper understanding of it's strengths and weaknesses, as well as complementary methods, will be increasingly important to neuroscience as other technologies for acquiring brain-wide activity come online. The authors compare and contrast several well-known techniques, but the problem as stated was not entirely understandable to me. The authors did a reasonable job of explaining the problem and their approach given the limited space. I would like to hear more about what the authors think GCI is actually capturing, and how this is different than mutual information (in a context that is more general than the example given). A small comment: the axis/figure text in Figure 1 is way too small to read. Neither Granger causality nor mutual information are what I would consider AI techniques. Though both can be used to understand deep networks and neuroscientific data, neither of those applications are presented here. As previously mentioned it will be increasingly important to develop these types of tools for both AI and neuroscience, but I feel the work is not at that point yet. The Schalkwijk and Kailath counterexample is a great one to build intuition, but application of these ideas to, say, and RNN trained on a neuroscience task would be a good addition to understand the differences between GCI and mutual information in a more relevant setting.""",2,0 neuroai19_16_3,"""Studying the causal influence of external stimuli and measuring the propagation of resulting information respresentations throughout neuobiological and artficial neural network models is a very important research topic, and a publication exploring that space of questions would be important if the systems studied therein were models relevant to the two use cases listed above; however, the results from this experiment do not add much of an understanding beyond what is already known in S-K systems. The technical content of this publication is sound, but the context is not terribly relevant. As other reviewers have pointed out, it is known that transfer entropy and granger causality are equivalent measurements of gaussian signals in autoregressive systems, a set of models to which the Schalkwijk - Kailath error correction models belong. GC or TE computed between any given pair of signals in the system will be equal, so this study is not able to address the differences between the two measurements. All signals present are either explicitly iid (in time) sampled from respective gaussian distributions or are the difference between gaussians, which implicitly defines said difference signals as gaussian themselves. Within-metric differences in GC and information-theoretic measurements are presented in figure 1(b) and 1(c); these show the evolution of information flow across time, but do nothing to highlight the presumed differences between GC and TE within the system presented and simulated. The results appear sound and consistent with this reviewer's general understanding of asymptotically correct systems, but they do not support the thesis presented by the paper, which itself appears to be a misunderstanding of the metrics considered. The body of text is itself readable, but the overall objective is confusing given the faults in the publication's premise. The figure is also hard to interpret -- the axes labels and figure legends are too small to be read clearly from a letter-/A4-sized representation of the paper. While the model presented and studied here may be a valid model of some neural structures in its graph form, the model itself is too simplistic for this reviewer to consider it an intersection of AI and neuroscience. Granger Causality and Transfer entropy are widely used in both AI and neuroscience and their interpretations require further scrutiny and research, but I would not say that the work presented here is highly indicative of the implied intersection of the two fields given its relative triviality. This is interesting in its own right, but not moreso than a standard exercise in understanding TE/GC evolution on an error-correcting code system. It doesn't fit the bill for this workshop.""",2,0 neuroai19_17_1,"""The paper focuses on the topic of learning non-parametric invariances using randomly wired networks. A network architecture is proposed that extends previous approaches and improves performance on an MNIST dataset with various transformations applied to it. The results are rather preliminary and their importance is difficult to assess due to the poor presentation of the paper. The results appear to provide an improvement over the previous NPTN work. However, because only one benchmark is used, it is difficult to assess the generality of the results. The paper is written in a dense and difficult to follow style. Part of this is due to the heavy reliance on previous NPTN literature. But it is also due to use of jargon and poorly defined parameters. Examples include G (if it is a number what is |G| needed for?) and CMP. The authors should strive to provide an intuitive description of their results. The diagram in figure 1 does not do a good job of describing the architecture. No general discussion of the results is provided. The connection to neuroscience is quite loose, as the authors acknowledge. The authors speculate that local random connectivity is present in the brain, but beyond that little discussion of the biological relevance of the results is made.""",2,0 neuroai19_17_2,"""The learning of invariances is a key problem in both machine and biological intelligence, and any progress made to understand it is of high importance. While the less-than-perfect clarity of the work makes it a little harder to ascertain the authors' success at making progress on this problem, it seems to me as though it is a solid step in the right direction. I might rate this as a 4.5 if I could. When it comes to building invariances, an important issue is being able to learn with fewer training examples than an architecture that doesn't have as much invariance-building capability. Here the authors present ""errors"" of their trained models without elaborating much further. It would have been helpful if the authors had discussed the training more (such as if they train until the error no longer decreases) and maybe shown the accuracy through training. There are other details that aren't explained. For instance, an important aspect of the model are fixed random connections, but it isn't discussed if these connections are weighted or not. These missing details don't seem to be essential to me. Overall the paper suffers from messy organization, difficult-to-digest expositions about the differences of at least four closely related models, some missing details, and some lack of motivation and intuition. Some of this is understandable due to the intrinsically complex nature of the work, but it seems that with more time and polish the paper could be improved a great deal (and still fit in four pages). Below are some specific examples of clarity issues. It would be helpful if the ""random unstructured local connections"" as seen in cortex were defined more precisely. Do these connections not change as the animal learns tasks while other connections do change? Do these connections map together different ""filters"", as in the authors' proposed model? There is a bundle of small issues with the writing. For instance, the acronym PRC is defined in the abstract but not in the main text. In Table 1, the labels in the caption are missing in the table itself. The label for the PRC-NPTN networks is different in the rotation table vs the pixel translation table. The organization of the paper gets in the way of its clarity. For instance, Transformation Networks are introduced in Section 1, but it isn't until Section 2 that the underlying theory is referenced (reference [1]). As far as I can tell Transformation Networks are a direct application of this theory to deep neural networks. This connection isn't made as explicit as it could have been. The architecture could have been made clearer if Figure 1 had shown an example of a (Non-Parametric) Transformation Network layer, as well as a standard convolution layer with max pooling, to compare with the Permanent Random Connectome Non-Parametric Transformation Network. While some intuition is provided for why random connections are advantageous over the standard Non-Parametric Transformation Network layers, a more thorough discussion of this important point would have been very helpful. Why is it helpful to max pool across different filters? While the authors don't do very much to explain the connection to biology and confess that this isn't a strong motivator for them, I believe that the connection is actually fairly strong. Success in their models suggests roles for random connections in the brain. Their results suggest potential improvements to state-of-the-art performance in artificial neural networks, since convolutional layers in very deep architectures could conceivably be swapped out by the layers proposed here. As such, the results are interesting both to the neuroscientist as well as the AI researcher. I feel that putting more effort into making the connection to biology could easily increase this score by a point. The exposition needs to be cleaned up. Figure 1 in particular needs to be expanded to include more models and more details. The authors should consider keeping only one of the organization trees in Figure 1 since the two feel redundant, or find a way to combine them. The buildup from the theory, to Transformation Networks, then to Non-Parametric Transformation Networks, then to Permanent Random Connectome Non-Parametric Transformation Networks, and finally to comparisons with convolution neural networks, should probably have happened in a more streamlined, linear way. Regarding the biological motivation for the fixed connections, this point could be strengthened somewhat by describing how the max pooling could be implemented by biology. The theory (as developed in [1]) seems to hold for averaging as well as max pooling, which may be more biologically feasible. In general I think the authors don't give themselves enough credit with the connection to biology (where the computationally beneficial aspects of random connections is already being discussed, i.e. for dimensionality expansion), and they could have laid out the connections more clearly. And finally, making the point that random fixed connections are important/useful, beyond showing simulation results, would strengthen the work considerably. The paper does a less-than-stellar job of making the case that a presentation of the work at the workshop would leave attendees with a basic understanding. That said, the paper has a great deal of potential, and could contribute significantly to both fields if clarity issues are resolved.""",4,0 neuroai19_18_1,"""The setting in which the proposed idea is tested is not fully convincing, and it does not achieve significant gains in this setting. Additionally, the general idea of using a polar transformation followed by a neural network to classify the transformed image is not novel. Overall, I do not believe this paper provides much actionable knowledge, even for those interested in the general idea. The authors have tried to design fair experiments, but some details seem a bit problematic. It seems like the authors attempted attempt to control the number of total pixels in each image representation. This is at least a good approximation to the computional cost. The authors train on ImageNet, which is a large-scale dataset well-suited to determining whether the proposed method can be used to improve image classification performance. What is more problematic is the small size of the images, the selection of the network architecture to process these images, and the poor performance of the baselines relative to previous results operating at the same image size. The authors apply DenseNet-121, which is an ImageNet network, to images of different tiny sizes (32x32, 16x16, and 44x23). This seems weird to do without adjusting the network architecture. DenseNet-121 is intended to operate on ~224x224 pixel images, and downsamples by a factor of 32 throughout the network (by a factor of 4 in the first two layers alone). A CIFAR-10 DenseNet might be more appropriate at this image size. Chrabaszcz et al. [1] show that a network trained on 32x32 pixel ImageNet can achieve 59% top-1 accuracy on the uncropped images. The best 3-crop result in this submission is 49% and the best single-crop result is 38%, so the gap is uncomfortably large. It's difficult to know how to interpret a ~1% gain from the proposed representation in the 3-crop setting given that a >50% relative gain in the 1-crop setting can be obtained simply by changing the architecture. [1] Chrabaszcz, P., Loshchilov, I., & Hutter, F. (2017). A downsampled variant of ImageNet as an alternative to the CIFAR datasets. arXiv preprint arXiv:1707.08819. In general, both the experiments and results are well-described. There were a few things that were unclear to me, but I realize that 4 pages is a rather significant space restriction. 1. Description of training (L34-L36): The authors say ""SGD"" but the initial DenseNet paper trained with SGD + momentum of 0.9. I'm also curious whether the authors take random crops from the tiny images as is common for ImageNet training or just do random flips. 2. Description of multi-resolution downsampling (L40-L43): The discussion indicates that the multi-resolution representation stacked the images side-by-side, but this was not clear from the description here. 3. It might have been useful to provide examples of input images for all input representations and not merely the polar retinal representation. The relevance of the structure of the human visual system for machine learning methods is an interesting topic at the intersection of AI and neuroscience. Since natural selection has tuned the structure of the human visual system to be particularly good at processing the visual environment that humans face, it stands to reason that machine learning systems could benefit from adopting aspects of this structure. This submission investigates the performance of ImageNet classifiers trained on tiny images generated by choosing salient image regions using the DeepGaze II model and applying several different downsampling techniques (uniform, polar-retinal, Cartesian-retinal, and multi-resolution). The authors find that Cartesian-retinal downsampling, which magnifies the central part of the patch, seems to perform marginally better than uniform downsampling in this setting. Strengths: The authors investigate several interesting downsampling techniques and describe the results clearly and accurately. Evaluation is performed on a real dataset (ImageNet) where results can be directly compared to previous results. Weaknesses: None of the proposed methods obtain substantial gains over the uniform downsampling baseline, with the best novel method (Cartesian-retinal) achieving an absolute gain of ~1% in top-1 accuracy. Given the poor performance of all networks relative to baselines that operate at the same image resolution without cropping, I am not convinced that the results would generalize to other settings (or even the same setting with a network architecture better suited to the task). The idea of polar downsampling is not novel, but previous work (e.g. [1,2]) is not cited. [1] Elliman, D. G., & Banks, R. N. (1990). Shift invariant neural net for machine vision. IEE Proceedings I (Communications, Speech and Vision), 137(3), 183-187. [2] Esteves, C., Allen-Blanchette, C., Zhou, X. & Daniilidis, K. (2018). Polar transformer networks. ICLR.""",2,0 neuroai19_18_2,"""-- Identifies an important question, of strong interest both for engineers and for visual neuroscientists. However, the results feel rather preliminary, given that there are many different ways this project could have been implemented, and it's not clear what effects each of the current implementation choices are having. -- The main oddity, as discussed by Reviewer 1, is that, if the hyperparameters of the Densenet are kept at their original values, then this implies that the network was trained on predominantly blank images, with only a small 32x32 pixel central region being the actual input image. Surely this means that the filter sizes and pooling choices within the network were not optimal for the inputs (and perhaps also that training was more difficult, as most input channels didn't contribute to the error gradient)? -- The accuracies of all networks are quite low compared to usual Imagenet performance of Densenet and similar state of the art models. Within this range of poor performance, there is little difference between the various downsampling methods explored. It would be informative to see a more systematic test of the three factors that could be influencing performance: (1) the saliency-based crop selection, (2) the eight-fold image downsampling, and (3) the three different variable-resolution strategies. -- The severe eight-fold downsampling is somewhat orthogonal to the question of whether variable resolution can be used to maximise computational resources. It seems like there's a risk that the combination of image cropping and extreme downsampling already reduced the image information to such a point that no meaningful benefits could be obtained by using variable resolution methods. It would be more compelling to see a series of tests with different degrees of downsampling, combined with the different variable resolution methods. -- Generally clearly written and well described. Figures are helpful and clearly illustrative of steps in methods. -- Addresses a strong intersectional question of interest to both fields (how might the variable resolution of mammalian visual sampling affect recognition performance, and could it have computational benefits?) -- Using an architecture with a native input size that matches the inputs might improve overall accuracies and better reveal differences between downsampling techniques. (Of course this would create a mismatch in input sizes for the Polar vs other methods, but the number of input channels would be almost identical so is perhaps not a major concern) -- It might be more informative to study the effects of the variable resolution manipulations separately from the effect of severe downsampling. i.e. consider versions of the network with 256x256 (or similar) input sizes, but using each of the three variable-resolution methods. -- In order to get an impression of how detrimental the various downsampling methods are, it would be helpful to have a baseline case of a network which is trained on the saliency-selected and cropped images, at full resolution (i.e. 256x256, or full network input size). The absolute accuracy values of all networks here are quite low, but some of that is presumably due to the cropping method removing more of the image than standard ILSVRC resizing & cropping approaches would? -- Description and discussion of results are appropriately measured, and frame the small differences in accuracies in a positive way without over-selling them. Authors identify several drawbacks of their methods and discuss them in an open fashion.""",2,0 neuroai19_18_3,"""This paper assesses several foveation techniques over a standardized data set and training procedure. The authors find that foveation does not substantially affect image recognition performance, given salience data. Though the claim of the paper is modest, the methods are explicit and systematic. One issue is that while the techniques all yield similar performance, that performance is not very good. The paper is clearly written, and figures are easy to interpret. The paper is mildly motivated by the observation that biological vision exhibits foveation. However, the applications are most relevant to engineered systems, and it is unclear how general the insights will be given the low performance of all of the models. The authors could perhaps contextualize this work more clearly in either a biological or an engineering motivation: how does their study inform existing theories of foveation, or advance techniques in image recognition? Bandwidth may be an important constraint in future engineered systems, so the finding that foveated downsampling is not a substantial hindrance to image recognition is potentially quite useful.""",2,0 neuroai19_19_1,"""The largest motivation seems to be the biological implausibility of backpropagation. However, many studies have shown that all aspects of backprop can be, and most likely are, realized in biological networks (error calculation, weight transport, etc. - e.g., Lillicrap & Santoro, 2019; Akrout et al. 2019). Therefore, this motivation is not enough alone. While the technical proofs are useful, the performance on MNIST is not particularly convincing. Additionally, the results seem quite noisy and would benefit from many repeated runs. What would be more interesting is to address how learning could take place through a combination of evolution- and life-time mechanisms, as opposed to a purely evolutionary-time mechanism. The inclusion of learning based on dopaminergic plasticity seems quite arbitrary. Additionally, the authors cite one biological paper, but methods like this have been used for decades in one way or another (e.g. pseudo-url) The approach is certainly introduced in interesting way and the methods are reasonably easy to follow. To what extent human abilities are represented at the genomic level vs. learned within a lifetime is certainly an interesting biological question, but its applicability to machine learning has yet to be shown convincingly.""",2,1 neuroai19_19_2,"""I don't believe this paper makes a compelling enough argument to cause readers to rethink what is learned in evolutionary rather than developmental time. I believe the algorithm functions as proposed. I suspect it is not a reasonable model for learning genetic influence on synapses. I understood the core idea, but felt that the idea itself had not been carefully thought through. This was mainly a proposal for evolutionary learning of biological network weights. The idea was tested using an artificial neural network, but was not otherwise strongly connected to machine learning. I did not find the proposal that individual synapses are learned via evolution to be compelling. This claim is seemingly contradicted by strong experimental evidence of learning at all scales during animals lifespans, and by an observation of the number of bits in the genome vs. the number of synapses in the brain. It would require stronger evidence, and discussion of the potential barriers, for me to take this proposal more seriously.""",2,1 neuroai19_19_3,"""It is interesting to think about how evolution interacts with learning that takes place during an organisms lifetime. While there is fruitful work to be done here, the motivation made by the authors needs a little more work. For instance, what types of learning do we expect to be encoded in an animals genes as opposed to its acquired synapses? Claiming that its more biologically plausible is not good enough: there are many plausible models that do not resort to evolution over generations. The fact that in Fig 1b it takes a while for the n=2000 model to learn anything suggests there may be significant variability in the results when repeated many times. Some repeated runs of the algorithm for a given number of genes would be helpful. The addition of the dopaminergic neural nets is not well enough explained and introduced to warrant inclusion. It appears just as a random addition to the model. Its not clear how the specific dopamine-related timing result they mention is incorporated in their model. More generally, reward-modulated plasticity is very well explored, why not just use these results? A result of 83% test accuracy with a non-linear network on MNIST is not so encouraging given that linear networks can perform better. Some simpler task might be worth investigating to get a better intuition for what in this model works and what doesnt, before trying MNIST. There are many small points and omitted details that make the presented work hard to evaluate. For instance: - Is the beta at line 67 the same as 1-gamma at line 119? - Line 53: by 0/1 allele distribution you mean a deterministic distribution? - Im confused about the set S. Is it a small set of 2000 elements of MNIST, sampled uniformly from all of MNIST, or only from digits 0-4? The text seems to suggest S is just 2000 elements from MNIST, while figure 1 presents results from digits 0-4 or the full set. Citing training results of 83% on S and 79% test on full MNIST, in the text, are both referring to Fig 1b? It took a number of passes through the text to figure out what was being plotted in relation to the analysis that was done. - The model could be more explicitly defined, though I know space is limited in this submission There is room here for these type of models to benefit both theoretical neuroscience, and AI. If something shows more promise on something like MNIST it may be of benefit to AI. If some of the details about how an evolutionary algorithm interacts with within-lifetime learning then it could benefit neuro. Definitely an interesting line of work. But the results need to be presented more clearly to really evaluate the worth of this particular model. Some demonstration that it does indeed work according to the theory provided (e.g. on simpler regression problems) would be useful to get the idea off the ground. The title and results are also a combination of two ideas (the evolutionary algorithm, and the sign matched 'dopamine' inspired updates). I would focus on one idea at a time or explain how they do go together""",2,1 neuroai19_20_1,"""Calcium imaging represents an important technological step forwards in our ability to record large populations of neurons. However, the increase in spatial resolution comes with a decrease in temporal resolution. Developing new algorithms for inferring the underlying neural activity at timescales shorter than fluorescence decay dynamics is crucial to taking full advantage of this data and the scientific insights it can lead to. The proposed algorithm is a rigorous model-based approach to the problem. The exposition was mostly clear, but I found it difficult to decipher the relationship between LFADS, stacked VAEs and ladder VAEs (as I have not heard of the last two before). Is the ladder feature necessary for this model to work, or does it merely improve the results? The motivation for using the VLAE approach is that it learns disentangled hierarchical features, but again it's not clear to me exactly how that is relevant here. Is it because the latent dynamics need to be disentangled from the calcium dynamics? A few clarifying sentences in the introduction of section 2 could go a long way to clearing up these ambiguities for me. The Ladder LFADS model is a combination of several recent neural network architectures that address an existing neuroscience problem in a new way. Strengths: The Ladder LFADS model does a good job of uncovering underlying dynamics and neural firing rates in simulated data. The combined approach outperforms a two-step approach where inference of latent dynamics follows a deconvolution step. Areas for improvement: Another useful benchmark would be replacing LFADS with a simple linear dynamical system. Though this would clearly fail in the case of Lorenz attractor dynamics, it seems like a natural comparison. It will also be interesting to see how well this method works on real neural data from different brain regions. In regions like motor cortex, where dynamical systems models have been used for many years now, it seems this model will perform well. It is unclear how the model will perform, however, in sensory areas like visual cortex where activity is arguably more related to external inputs than internal dynamics. I would also be curious to know how well the model works without using the ladder component; this seems like another natural comparison that could further motivate the modeling choice.""",3,1 neuroai19_20_2,"""Calcium imaging allows the simultaneous visualization of activity from thousands of neurons. Recordings from wider fields can lead to a lower SNR. Developing unsupervised methods that can reveal underlying dynamics or otherwise denoise the data is of critical importance for neuroscience. The results in this paper look promising, but they come from (I believe) entirely synthetic data. - Has the model been applied to real traces, and is there a way to reliability evaluate the quality of the output (spike train inference, underlying dynamics)? - Line 75: How much white noise was added to the synthetic traces? How robust was the model to noise? - Line 75: When generating the data, how does adding some noise to the time-constant affect the model? - Line 69: How was the parameter for L2 regularization determined? The paper is well-composed for the most part, I have some minor comments: - Line 17: period missing in ""brain activity Pandarinath"" - Lines 34-35: ""We choose the VLAE approach ... in contrast to stacked VAEs or ladder VAEs"". Is VLAE not a ladder VAE? - Line 52: Reference for GCAMP6 time constant - Line 65: open bracket ) This paper is an example of applying AI (previously published unsupervised variational autoencoders) to the analysis of calcium imaging traces, a common neuroscience recording technique. This paper combines two previously published unsupervised VAE-based models and adapts them to infer the underlying dynamics of a synthetic calcium imaging dataset. While the results look excellent on the synthetic data, it's difficult to evaluate how great this method would be without seeing its performance on real traces.""",3,1 neuroai19_20_3,"""Estimating the dynamics of the latents while incorporating calcium dynamics is an important consideration. Estimating the calcium kernel with the dynamics is an interesting challenge. However, here the kernel is known by the authors, and it seems like they essentially stick on a known kernel at the output of LFADS. Moreover, given that the authors use relatively clean synthetic data, and the performance gains are minimal at best, it remains to be seen whether this approach has any advantages. The approach seems principled and technically sound. I commend the authors on their sincere and well-performed benchmarking. The actual results are just not much better than the stepwise deconvolution + LFADS, which is unfortunate. The authors could have added more noise to their dynamics / poisson observations, in order to clarify the regimes in which their approach may work better than the stepwise approach. The method has promise, though, and it would be interesting to see what it looks like on real data. Well written and well presented. Augmenting a machine learning model to estimate dynamics in neural data, although only synthetic at this point.""",3,1 neuroai19_21_1,"""This submission presents a meta-learning approach to discovering local updates guided by feedback. The goal is to move towards more biologically plausible learning mechanisms. This is an important topic for linking neuroscience and AI, and the approach the authors take here is interesting/promising. There was little in the way of technical details. Partly, that was a matter of space, but it was also partly a matter of choice (e.g. the description of the experiments could have been shortened to make a bit more room for math). Also, the proof only demonstrates the expressivity of the approach. One concern I would have is the question of training efficiency - is it more efficient than standard meta-learning techniques and are there ways to make it more efficient by loosening some of the constraints on feedback and plasticity rules? Regardless, it is hard to fully assess the technical rigour, but the basic concept seems sound and the experiments are reasonably convincing. The finding regarding the importance of feedback is particularly illuminating in my opinion. Not perfect, but overall very well-written. Definitely at the intersection of neuroscience and AI! Overall, a great submission. I have many questions (e.g., what does the learned feedback look like? What do the learned update rules look like? Why force local learning rules based on old findings in neuroscience? We are beginning to realize that it isn't all about Hebbian plasticity! See e.g.: pseudo-url). But, the workshop is the perfect place to ask these questions. :)""",4,1 neuroai19_21_2,"""This is nice work that addresses the credit assignment problem with a meta-learning approach. The motivation needs to be a bit clearer. Is the work trying to address the credit assignment problem in general, or just when applied to online learning tasks? Either way this is important work, with many interesting future directions. The model and implementation make sense as far as I can tell from this brief submission. The theoretical results stated are nice to have. Section 1 pitches the method as solving the credit assignment problem, citing problems with weight symmetry etc, that apply to many forms of learning. But the related work in Section 2 then goes on to talk about the efficiency of backprop for solving online learning and few-shot learning tasks. The efficiency of backprop should be mentioned in the intro if it is something this work is aiming to address. While much human learning may be more naturally cast as online learning, not all of it is. There may be much interest in how we learn from so few samples in certain settings, but we also learn some relationships/tasks in a classical associationist manner which is well modeled by 'slow' gradient-descent like learning (e.g. Rescorla Wagner). The credit assignment problem exists in these cases also. So I think the present work needs to be repitched slightly as solving credit assignment in an online/few shot learning setting. Or discuss how it can be extended to more general learning problems. The submission is pretty clear. In understanding the model, it would be useful to more explicitly define the model. For instance, how is the b at line 63 related to the activation x_i and ReLU at lines 75 and 76? There are exiting directions in both AI and neuroscience this work could be take. Seeing if these meta-learnt rules line up with previously characterized biological learning rules is particularly interesting. Define the model more explicitly. And emphasize that this only solves credit assignment for certain types of learning problems (at the moment).""",4,1 neuroai19_21_3,"""This paper presents some interesting results on meta-learning of weights in a more biologically plausible neural network. The results are fairly important, as they suggest that a proper initialization may be a key aspect of the success of biologically plausible learning rules. Overall, the authors are rigorous in their evaluation. For the final draft, the authors should include their additional analyses on the feedback weights. Much of the paper was clear in its description. One point of confusion is the distinction between gradient-based learning and gradient-based meta-learning. The authors claim that they compare with gradient-based meta-learning, however, their method also uses gradients to perform meta-learning. Clarifying these details/wording would help to clear up the confusion. The paper touches on concepts in both neuroscience and machine learning, however, the paper ultimately seems more geared toward a machine learning audience. For instance, while the authors briefly speculate about alternative ways in which meta-learning could be implemented, they do not provide an in-depth discussion on its biological plausibility. This paper presents an interesting approach to improving biologically plausible learning in deep networks. A few aspects of the paper could be clarified, e.g. the baseline methods. Diagrams would also be helpful in clarifying the feedforward vs. feedback mechanisms. Again, I would want to see the additional analyses included in the final draft. This paper would be a useful addition to the workshop.""",4,1 neuroai19_22_1,"""The authors consider how biologically motivated synaptic eligibility traces can be used for backpropagation-like learning, in particular by approximating local gradient computations in recurrent neural networks. This sheds new light on how artificial network algorithms might be implementable by the brain. Space is of course limited, but the mathematics presented seem to pass all sanity checks and gives sufficiently rigor to the authors' approach. It would have been nice to present a figure showing how e-prop yields eligibility traces resembling STDP, as this is one of the key connections of this work to biology. Given its technical details it was reasonably straightforward to follow. The authors directly tried to associate biological learning rules with deep network learning rules in AI. Gives important new results about how eligibility traces can be used to approximate gradients when adequately combined with a learning signal. While eligibility traces have received some attention in neuroscience their relevance to learning has not been thoroughly explored, so this paper makes a welcome contribution that fits well within the workshop goals. One part that would have been nice to clarify is the relative role of random feedback vs eligibility traces in successful network performance. It also would have been nice to comment on the relationship of this work to unsupervised (e.g. Hebbian-based) learning rules. A final addition that would have made this work more compelling would have been to more thoroughly explore e-prop for computations that unfold on timescales beyond those built-in to the neurons (e.g. membrane or adaptation timescales) and which instead rely on reverberating network activity.""",4,1 neuroai19_22_2,"""Understanding how synaptic plasticity allows recurrent neural circuits to produce functional patterns of activity is a critical question in neuroscience. This paper directly addresses this question by deriving a synaptic plasticity rule that does exactly this, as well as contextualizing it within a number of related experimental findings. Due to the space constraints of a 4-page paper, not many mathematical details are provided for the derivation of the proposed algorithm. However, the exposition of the algorithm is clear and principled and the simulations are convincing. One piece that is missing from the results is the limitations of the e-prop algorithm relative to BPTT, given the approximations made in its derivation. Mostly well-written and clear. This paper derives a biologically plausible plasticity rule approximating the backpropagation-through-time (BPTT) algorithm from the artificial intelligence literature, explicitly linking artifical intelligence learning algorithms to biological ones. The work presented in this paper is highly relevant to this workshop and a valuable contribution to the field of synaptic plasticity and learning in recurrent networks. There is little question in the mind of this reviewer that this paper merits a high score. That said, in the opinion of this reviewer two important pieces are missing in this paper. Firstly, the discussion of how the proposed algorithm relates to previous proposals is very limited. In particular, making the explicit connection to real-time recurrent learning (RTRL) is warranted, as these two algorithms are very similar in spirit. Additionally, it seems that e-prop is very similar to the particular RTRL approximation proposed in reference 8. This link also merits further discussion. Secondly, an interesting question is how the approximations made in e-prop affect its performance. For example, asymptotic performance seems not to be affected (figure D), but learning speed is (figure C). Why is this? And are these limitations inherent to any biologically plausible (i.e. local) approximation to BPTT? These may have reasonably been omitted due to space constraints, but it would be ideal if they were explored and discussed in the future presentation of this work.""",4,1 neuroai19_22_3,"""This work addresses how temporal credit assignment can be solved in spiking recurrent networks. Based on approximate gradients of a loss in recurrent spiking networks with threshold adaptation, a biologically plausible local learning rule is derived that involves an eligibility trace, pre- and postsynaptic activity. The results seem unparalleled both in terms of performance and biological plausibility and open a promising avenue to implement (reinforcement) learning in spiking neural networks. The derivation of a local and biologically plausible learning rule is only partially understandable (because of the limitations of the 4-page format), but the general concept is clear. It is, however not clear how different simplifying assumption in the approximation of the gradients are justified and why they have only a minor effect on the final performance. Moreover, the robustness of the results with respect to details of the parameters is not apparent. Is the excellent performance only observed in a small parameter regime that requires fine-tuning, or is it a general feature? When does it break down and why? Is the assumption of a fully connected network crucial, or would this also work on sparse networks? The problem is stated clearly, the methods are explained well. Because of the limited space, the derivation is only conceptually understandable not in every mathematical step, but the reviewer can't blame the authors for that. The results are clear and understandable. The problem of credit assignment in recurrent networks is relevant both for machine learning and for neuroscience. While superficially, this works seems mostly to be a biologically plausible implementation of gradient-based learning in recurrent spiking networks, it might also provide inspiration for the machine learning community to think beyond discrete-time firing RNN. Currently spiking networks are barely used in machine learning despite their advantages (e.g. lower energy need), because it seems difficult to train them to do something useful, hopefully, this paper might be a step towards changing this. This work is very suitable for the workshop and seems relevant both to machine learning and neuroscience. Nevertheless, here are a couple of ideas for improvement: * Relating this work to previous attempts to train spiking neurons would be important. (e.g. D. Thalmeier, M. Uhlmann, B. Kappen, and R.-M. Memmesheimer 2016, DePasquale, B., Churchland, M.M. & Abbott, L.F. 2016, R. Guetig 2016, Kim, Chow 2018, A. Ingrosso, L.F. Abbott 2018) * How does the network capacity scale with network size? * Is the low irregularity (coefficient of variation of inter-spike intervals after training seems very small) a feature of the learning algorithm? If yes, how can this get more realistic irregular? * What is the dynamic state of networks after training? Is there a cancellation of external inputs by net inhibitory recurrent interaction, like in a balanced state? How do pairwise correlations change during training and are they biologically plausible? * Are spikes in this framework necessary for computation, or are they just a biologically plausible feature that doesn't harm too much? If spikes are not required, could this be mapped to a rate-based analogous network e.g. with BCM-like plasticity, where analytical results might be easier to achieve? * What are the core mechanisms of this learning algorithm and how could they be understood in more detail? * How could this be used to implement reinforcement learning, regression, classification, time-series prediction? * (How) Can E-prop be characterized analytically in a simplified form/on a toy problem? * Which experimentally testable predictions arising from this work?""",4,1 neuroai19_23_1,"""Iterative processing is an important tool in the brain and undoubtedly useful for AI. This paper proposes a model for performing such iterative processing on vision tasks and importantly demonstrates how training this process on ""clean"" data can automatically transfer to better performance on noisy data The robustness to noise in the CNN-F compared to a standard CNN is quite strong and impressive for a network never trained on these particular types of noise. A demonstration on a more challenging dataset that had more within category variation would be particularly impressive. Some of the technical details took a few tries to understand but overall quite clearly written The model takes as inspiration the general idea that feedback processing is useful in the visual system, however the type of feedback used here has only abstract resemblance to that in the brain. It is not the case the separate systems in the brain reconstruct a low level image to be passed into the very earliest stage of processing (at the expense of the true sensory input). Biologically-inspired feedback would have a more modulatory effect throughout the system """,4,1 neuroai19_23_2,"""This seems like a reasonable extension of a previous model to build a generative model on top of a CNN. The future work points to particularly interesting directions that would make this work important (e.g. ""We also plan to measure the similarity between the latent representations of the CNN-F with neural activity recorded from the brain in order to access whether CNN-F is a good 128 model for human vision"") The model seems like a reasonable extension of CNN-F_0. Where are the comparisons on reconstruction between CNN_F and CNN-F_0 in image restoration? What is the test accuracy of CNN-F_0 on MNIST? Seems like these are important comparisons. The paper is quite clear. More connections should be made between the idea of recurrent feedback loops and the type of feedback considered here. How does this work relate to models in predictive coding (Dora et al 2018), and how does it relate to ideas that the brain implements belief propagation (e.g. the work of Pitkow)? Seems like good results. Although the difference in performance between this work and the model it builds on (CNN-F_0) needs to be more clear.""",3,1 neuroai19_23_3,"""The paper presents a novel approach to iterative inference by combining bottom-up inference with top-down rendering in a CNN. The function of recurrence in the visual cortex is an important open research topic, as temporal activity in cortex is not fully captured yet. Further, robustness to image degradations is a common failure case of CNNs which the method presented in this paper addresses by iteratively refining estimates. The proposed CNN-F model is trained on clean MNIST and then evaluated on degraded test images. CNN-F is compared with a vanilla CNN baseline which makes for a convincing first analysis. However, the proposed CNN-F model is only compared with the vanilla CNN -- comparisons to alternative models for iterative inference / degraded images are lacking. For instance, CNN-F could be compared with a Hopfield model approach to occlusion (Tang et al. 2018, pseudo-url) or a CNN with lateral recurrence for untangling (Spoerer at al. 2017, pseudo-url), or an inverse graphics approach (Wu et al. 2016, pseudo-url). It is thus unclear whether feedback across the whole network is really necessary to solve this task or whether single-layer lateral recurrence or even just different training objectives might also be sufficient. In other words, only a plain CNN is falsified by this study which was already known before. The paper is overall well-written, the algorithm is well-explained, and the experimental studies are clear. There are a few sources of confusion for me: 1. switching between CNN-F and CNN-F_k. How about calling it ""CNN-F (0 iterations)"" and ""CNN-F (10 iterations)"", and then state that ""CNN-F"" defaults to ""CNN-F (10 iterations)""? The ""_k"" suffix makes it seem like an entirely different architecture to me. 2. In Algorithm 1, it is unclear what the structured prior is or how it was chosen. (only defined in the text) 3. In Algorithm 1, the operator T is not defined. (explained in the text, but missing in the Algorithm) Minor: line 34: the c of ""convolutional"" is missing line 82: s of ""In other words"" is missing The proposed algorithm is inspired by humans being able to do the task and the general notion of recurrence/feedback in the brain. However, the model is not quantitatively compared to e.g. humans performing the task (do model and humans make the same mistakes), or to neural recordings. Since the model is only trained on MNIST, I think it is also very unlikely that the model activations will correspond to neural activity -- usually these things only start to work out when the models are scaled up to ImageNet level. The overall approach of combining bottom-up inference with top-down rendering I think has merit and, in terms of ideas, connects to several areas of research in cognitive and neuroscience. Training the model on clean images only and testing on degraded images is a convincing analysis . However, without comparisons to alternative approaches to the same problem, this paper is difficult to evaluate with respect to the existing body of literature. For the purposes of this workshop, the paper also lacks comparisons to human performance and/or neural recordings. For instance, I would like to know where humans stand on the results in Table 1 in order to tell whether this model is any more or less brain-like. To seriously test this idea on neural recordings, I am fairly certain the model first needs to be scaled up from MNIST to ImageNet levels before being a viable neural candidate.""",3,1 neuroai19_24_1,"""An interesting and relevant study for the workshop. Offers important insights from vision neuroscience that can have specific and concrete impact for DL approaches. no issue to raise. Please discuss the relationship of the unsupervised learning rules to the neocognitron. See importance I would be interested to see how the learning rules work when stacked.""",4,1 neuroai19_24_2,"""This paper presents experiments to evaluate the performance of a biologically plausible unsupervised learning algorithm (presented in [1]), a topic of interest to the audience of this workshop. However, the work as presented here is limited. The algorithm is not explained here, making the work difficult to understand. The experiments, while suggestive, are limited and have confounds that make them difficult to interpret. Additionally, the paper appears to overstate its results. The evaluation is limited: (i) The paper claims to show that Hebbian learning is competitive with backpropagation. But it does not evaluate any deep neural networks, where this claim is typically applied. (ii) The paper presents learned convolutional filters to demonstrate the strength of their algorithm. These filters are evocative, but it's unclear what they imply about the function learned by the network. The presence of filters like these is at best a sanity check, not a demonstration that the network is competitive with backpropagation. (iii) Many other learning methods that do not use backpropagation have been demonstrated to learn convolutional filters. I'm mostly aware of results on grayscale images (for methods including independent component analysis and sparse coding), but this likely holds on RGB images as well. This isn't discussed in the paper and no comparisons to other methods that don't use backprop are given. (iv) The results shown in Figure 3 suggest that using ""patch normalization"" (i.e. scaling the output of a layer to have norm 1) or using an additional power nonlinearity give robustness to contrast changes in a patch at test time. This is an interesting result, but it is not central to the paper's claim. From the results, it's not clear if this is primarily due to the normalization applied to patches or to the nonstandard nonlinearity used. No ablations are presented. It's unclear how general this apparent robustness is: does it hold for other types of image distortions, or just the particular shadowing presented? (v) Arguably the main result of the paper, presented in figure 2, shows that a network trained with their algorithm performs similarly to backpropagation. However, the network trained with backpropagation uses a simpler nonlinearity (ReLU) and no patch normalization. For this comparison to be fair, the two networks should be trained with the same architecture. It is likely that the nonlinearity used for the Hebbian model (but not the backprop model) is better suited for this task, given the results shown in Figure 3 (discussed above). As such, it is not clear that the Hebbian learning algorithm is the source of the source of the model's performance in Figure 2. (vi) The paper reports test errors on CIFAR-10 of ~22%. These are not competitive numbers on CIFAR-10 (errors <10% are standard). This is not surprising given that only single-hidden-layer networks are presented here, but this result should be contextualized. The paper reads well. But unfortunately it (i) includes no details of the algorithm used and (ii) gives no intuition for why this algorithm should be expected to be competitive with backpropagation, and (iii) overstates the strength of its results. Regarding the third point: the paper claims to present evidence that local Hebbian learning is competitive with backpropagation. The evidence they present for this is that the algorithm in [1] can learn convolutional filters and that it can outperform a simpler, shallow network trained with backprop on CIFAR-10. This evidence misses the point of claims of the benefits of backpropagation, which are most clear in the context of deep networks (not networks with single hidden layers) and on real-world or large-scale tasks. This work presents additional results for an unsupervised learning algorithm that could plausibly be implemented by a biological neural system. As such, the topic it addresses is of interest to both AI and neuroscience communities. This paper presents several experiments with an algorithm that is likely of interest to attendees of the workshop, but that has been published elsewhere. The experimental results presented in this paper are interesting, but very limited and hard to evaluate. The paper makes relatively strong claims it does not support empirically.""",2,1 neuroai19_24_3,"""This work could be important. Two issues: the work is billed as a general advance, but the approach is designed for and tested on a very specific problem. Second, their are not enough details presented to evaluate the specifics of the algorithm and differentiate the work from previous work. The work may be quite technically sound. Not enough details are presented to evaluate. The writing is fine. Again, lack of details affects the overall clarity, however. Highly relevant to neuro/ai More details are needed, more clear differentiation from previous work, and more explanation of how this is a general approach to the grand problems discussed in the intro, or if it has limited applicability to a specific problem.""",3,1 neuroai19_25_1,"""The submissions has a number of important contributions: 1) suggesting a list of criteria for evaluating biologically plausible learning algorithms, 2) comparing the biological plausibility of recently proposed real time recurrent learning algorithms, and 3) proposing and evaluating a method for approximating the network Jacobian online. The technical rigor is superb. Mathematical terms are all properly defined, algorithms are defined in these terms, and the new approximation method is empirically evaluated on some simple tasks. Overall, the submission is very clear. For the final submission, the authors could improve the clarity even further by elaborating on their findings/set-up and including diagrams of learning algorithms/techniques. This submission includes aspects of both neuroscience and machine learning. The findings may be more relevant to a neuroscience audience, but members from both fields will find the work interesting and insightful. Additional diagrams and perhaps a short summary of each learning algorithm would help for the final submission. The authors could also discuss/speculate how these ideas might map on to specific circuits in cortex/hippocampus/etc. Overall, great work!""",4,1 neuroai19_25_2,"""The current field of biologically plausible learning rules is littered with many proposals and few unifying frameworks. This paper addresses that nicely by breaking down the specific elements required for temporal credit assignment and assessing the biological plausibility of each one. This is an immensely important change in tact that the field needs more of! Overall, the paper is technically excellent. There are some lingering questions I have about potential means of implementing Jacobians biologically, and a few other minor things, but overall the framework is very clear and the arguments well founded. The demonstration of the learning capabilities of the modified DNI rule is great as well. The paper is very well written, but, possibly due to space constraints, it was a bit hard to follow all the various algorithms discussed. On that note: it would be better to cite the original papers in table 1, so readers can look them up and compare without having to check back through the text. It is right at the intersection of ML and comp neuro. Fantastic submission, perfect for the workshop. I look forward to seeing it presented!""",4,1 neuroai19_25_3,"""To understand the potential for various learning rules in artificial neural networks in terms of biological plausibility the authors enumerate specific criteria to systematically evaluate bioplausibility in several state-of-the-art learning algorithms. While mostly a principled survey of existing algorithms rather than new research results, this is nonetheless an important step forward in clarifying the relationships between artificial intelligent systems and the brain. Although space-limited, the authors did a nice job on emphasizing the key computational features of the learning context and specific algorithms they explored, without glossing over mathematical details. The specific enumeration of bioplausibility criteria, while written in words, also nicely provided a principled mathematical basis for their analyses. While it was clear in Table 1 which algorithms required e.g. the network Jacobian or matrix products, its presentation could have probably been simplified quite a bit. Given the large number of different mathematical ideas they needed to convey, however, the paper was generally quite straightforward to read. This paper directly evaluates several AI learning algorithms in terms of their biological plausibility. The authors provide a very nice, principled survey of several AI algorithms in terms of biological plausibility, focusing specifically on biologically plausible ways to implement operations involving the network Jacobian. While the authors didn't strongly suggest any novel algorithms as a result (besides DNI(b) ), this is nonetheless a useful first step toward establishing a common framework for developing new approaches in both neuroscience and AI. One thing I think would have been useful to mention, even if rigorous analysis was beyond the scope of the manuscript, would be unsupervised and reinforcement learning algorithms, in which errors are not necessarily defined by moment-to-moment differences between generated and target time-series, but rather in terms of sporadic rewards and punishment, and which may have a deeper intrinsic link to biological learning rules.""",4,1 neuroai19_26_1,"""Meta-learning is surely crucial to how the brain works, and it is very interesting to investigate local learning rules via meta-learning. However, this paper does not approach the issue from a sensible stand point, and is very confused about the application of meta-learning (more on this below), so it does not make a very important contribution in my opinion. The technical approach in this paper is problematic. A few major issues: 1. The claim that local signals in the brain do not carry global loss function information is pure speculation, and not well founded. For example, equilibrium propagation (pseudo-url) uses local learning rules and does follow a global loss gradient. Furthermore, gradient descent != supervised learning. So, right off the bat some of the motivations for the paper are unjustified. 2. Generally, in meta-learning, there are two learning loops, an inner loop where the learner's parameters are updated, and an outer loop, where the meta-learner's parameters are updated. Importantly, meta-learning typically involves multiple different task variants in the outer loop, because the point is that the system must learn how to learn across tasks. However, in this paper, there is no outer loop where multiple tasks are used to learn how to learn. As such, the meta-learner does not learn how to learn broadly. Rather, the meta-learner is only tasked with figuring out weight updates for a specific task, and with the constraint of a local update rule. This is not so much meta-learning as optimisation of a local learning rule. This would be why, I suspect, no data efficiency improvements occur. The entire approach is muddled. 3. The performance is not very impressive. It is possible to achieve much better results on fashion-MNIST than the authors report using backprop. Given that no data efficiency is achieved either, I don't actually see any real technical contribution here. The clarity is very poor. Example: the authors refer to parameters phi_L and phi_M in the text, but these are not mentioned in the algorithm. The paper is littered with such stray concepts, etc. Moreover, the basic premises are poorly stated in my opinion. The actual use of neuroscience here is limited, and is almost wholly born out of a misunderstanding about the nature of learning in the brain, e.g. that the learning rules are local and could not possibly follow cost function gradients. Note, for example, that there is growing evidence of non-Hebbian plasticity in the brain, see e.g.: pseudo-url Moreover, the impact on AI would be limited, as this paper does not provide any advances in the field of meta-learning for ML. This submission has some neat ideas in its kernel, but the authors are confused about both learning in the brain and meta-learning more broadly. For examples of more clear-headed papers that are thinking in a similar direction of this submission, see: pseudo-url pseudo-url""",2,1 neuroai19_26_2,"""The motivation behind the work is important, however, the methods and results are not yet convincing and need substantially more intuition and analyses to convince that its results are significant. The neuroscience motivation is based on some false assumptions. While the brain may not be trained exactly like backprop, its still an open debate about how much global error signal plays a role in learning, especially given the rich literature in predictive coding and error feedback for visuomotor tasks. In general, prediction plays a key role in learning, and this includes predicting distributions of the world. Some of the results require more experiments to be convincing. The experiment carried out is for one trial, itd be good to get a sense of reproducibility. The difference is fairly indistinguishable in figure 1, right panel. Also, control not described in sufficient detail. While the motivation and connections are fleshed out, the methods are confusing details are missing. The result section also is difficult to interpret in the context of the motivation. In general, more intuition is needed to justify these choices. In addition, the results needs more experiments to probe at a potential explanation. For instance, is it possible that the meta-learner is acting as a normalizer, keeping the neural activities in each layer within some bounds? Strong. Tackles the question of integrating biologically plausible learning rules along backprop to train neural networks. Some more analyses would be helpful and greater clarity in discussing the models, mainly the meta-learner.""",2,1 neuroai19_26_3,"""I believe the approach of meta-learning biologically plausible learning rules is extremely promising. I greatly appreciated the clear discussion of the unexpected behavior of the learning rule, and counterintuitive mechanism by which it may be acting. It was difficult to understand the details of the algorithm, though I believe this is largely due to the length constraints. This is taking meta-learning techniques from machine learning, and applying them to biological learning. I believe the approach and preliminary results are promising. """,4,1 neuroai19_27_1,"""The current work presents an algorithm for neural network training using node perturbation that does not rely on weight transport and performs well on a number of difficult machine learning problems. These methods are essential for neuroscience and AI and will hopefully make solid testable predictions in the near future. The results of node perturbation for MNIST, auto-encoding MNIST, and CIFAR are convincing. Authors show the average of multiple runs and over different noise levels. Where the method has drawbacks (noise requirements, baseline loss, separate feedforward and feedback learning), the authors have clearly pointed to ways these requirements are in line with biology, or could be removed in future work. The method and benchmarks being performed are described clearly and with reference to the relevant literature. How real and artificial neural networks can learn without direct access to synaptic weight information from other neurons (weight transport) is an essential question in both neuroscience and AI. The similarity between this work and Akrout et al. (2019) is definitely large. Would be curious to hear the authors thoughts on the potential advantages / disadvantages of their method in comparison.""",4,1 neuroai19_27_2,"""Understanding how learning occurs in the brain is extremely important. Understanding how the brain could implement backprop is also important. This approach seems technically correct, but inefficient with potential scaling issues -- it seems unlikely it will change how readers think about learning in the brain. I believe all claims are correct. This was clearly written, but seemed unnecessarily complex. This work focuses on porting the idea of backprop from AI to neuro. I don't understand the need for the noise perturbations. This work proposes updating the backwards weights with (B^T e - lambda) e^T, and states that doing so will cause them to converge towards the transpose of the forward weights. Wouldn't it be simpler, and require a less complex circuit, simply to update the backwards weights with h^{i-1} e^T? (as is proposed in [Kolen and Pollack, 1994]). In this case, foward and reverse weights will also converge towards each other. It seems like doing this by injecting noise instead of just using the forward activations requires both a more complex, and noisier, circuit. Also, if every unit is simultaneously injecting noise, it's not obvious to me that this will scale better with number of units than RL -- I suspect the scaling will be exactly the same, since noise contributions from different units will interfere with each other. (should cite evolutionary strategies for your functional form for lambda) """,3,1 neuroai19_27_3,"""Learning effective backward-pass weights is an important step towards biologically plausible learning of difficult tasks in ML. The experiments and visualizations are rich and convincing. A learning-based approach to credit assignment seems to be clearly better than relying on feedback alignment. I agree with reviewer 3 that a discussion of training signal variance scaling would be helpful, and I agree with reviewer 2 that comparisons to more related approaches would be interesting. I understood the methods section up until ""we will use the noisy response to estimate gradients"", which is why I don't have a good sense of how this approach will scale (see reviewer 3's comment about simultaneous noise injection). Other than this, the paper is interesting, well written, and well organized. Learning without weight transport is of interest to members of both communities.""",4,1 neuroai19_28_1,"""Predictive coding is a current theory in systems neuroscience with a lot of potential for development by looking at deep generative models. Likewise, deep generative models inspired by thalamocortical architecture and dynamics could result in improvements to online perceptual learning. Since there are no results in this submission, I have read it like a synthesis of two divergent literatures. The initial descriptions of predictive coding and variational autoencoders are precise and succinct. However, the comparisons and contrasts is very shallow. The discussion on biologically plausible backpropagation is worthwhile, however, the connection to either predictive coding or variational autoencoders is not made. The discussion on normalizing flows is interesting and links with predictive coding are established, so I accept from this article that this could be an interesting frontier for research. While the descriptions of the concepts in this article are clear, the overall synthesis and thesis of the article are not. The article is addressing open questions relevant to both AI and neuroscience. Strengths: The ideas floated in this article have a lot of potential to launch a research topic. The structure is strong and would make for a good first draft of a research grant. Areas for improvement: The thesis needs to be more focused. What is(are) the research question(s) that you want the reader to reach by the time they finish reading? Narrowing this down and making it clear is an absolute must. In this vein, I felt the discussion of normalizing flows was particularly promising.""",2,1 neuroai19_28_2,"""Predictive coding remains of great interest in systems neuroscience - with much effort devoted to linking theory to biological function. Thalamocortical architecture has been relatively well characterized biologically, suggesting it may be a good architecture for future efforts. As the authors' goal seems to have been to present a synthesis of ideas from the field, the technical rigor may be acceptable on these grounds. The overall text was well-written and easy to follow. The figures were only somewhat helpful, but as this is a synthesis paper, added to the overall story. Backpropagation is an area of great interest for both AI and neuroscience; in this sense, this paper highlights interesting ways in which this could be a future research direction. Broadly, I wonder at statements of biology relying on local learning rules - of course it does, but the studies referenced are largely in neuronal culture dishes. Perhaps by understanding dynamics at a systems scale (in small model organisms perhaps), it may be both local and global. It is unclear based on the authors' framing if their deep network approach allows for such flexibility. The work is very clear to read and follow - overall, the presentation is strong. There are many potential areas of interest that arise from the ideas outlined here. Overall, however, the work would benefit from more discussion by the authors of why they chose these topics (i.e. which of these ideas is of particular interest, such that they think that these are an interesting new research direction). Some sort of brief outlook or summary for future consideration would be of added value at the end of the document.""",3,1 neuroai19_28_3,"""The paper provides a high-level overview of predictive coding and VAEs and speculatively connects these two methods to outstanding questions in neuroscience (the function of lateral connections and whether backpropagation occurs in the brain). The high-level overview of VAEs and predictive coding appears to be correct. However, the connections made in this paper to neuroscience (in the sections on backpropagation and normalizing flows) are largely speculative. No substantive predictions are made, and the biological details are not examined with enough granularity to draw any solid conclusions. For example, it's unclear in what sense normalizing flows may ""help justify design choices in predictive coding,"" as claimed. The exposition is fairly clear. This paper attempts to build a bridge between variational autoencoders (an important framework for generative modeling and unsupervised learning in ML) and predictive coding (a controversial, but potentially powerful explanatory framework in neuroscience). The paper provides a high-level overview of predictive coding and VAEs and speculatively connects these two methods to outstanding questions in neuroscience (for example: to the function of lateral connections and the question of whether backpropagation-like computations occur in the brain). This work is very preliminary and presents no technical results.""",1,1 neuroai19_29_1,"""The low dimensional manifolds (e.g. in frontal regions) as well as high dimensionality of representations (e.g. mixed selectivities in prefrontal regions) both have been shown to exist in the brain, and based on the learning theory, both have their own computational advantages. In this paper, the authors show that recurrent neural networks (RNN), as models of brain dynamics, can form both low- and high-dimensional representation spaces depending on their initial pre-training chaotic behaviour. The main hypothesis is that the initial chaotic behaviour of RNNs matter in their post-training representation space. Through simulations, the authors provided two toy examples where RNNs were trained to solve a binary classification problem in a high dimensional input space. The RNN at the edge of chaos (EOC) can form a compressed representation space that suits the binary classification problem. Intuitively, the EOC RNN collapsed all the input space to two attractors that were easily separable. In contrast, the strongly chaotic RNN (SC) learned to expand the input space and form a linearly separable representation of input samples. The SC RNN was shown to be quite useful when the input samples were lying on a low-dimensional manifold in the input space where the two classes were not linearly separable (Figure 3). The examples support the main hypothesis that the two initial modes of chaotic behaviour leads to two different learned representations in RNNs. However, it's not yet convincing that this model underlies the observed dimensionality (low or high) of neuronal activity in the brain. Specifically, since the two expansive and compressive behaviours depend on two different initial chaotic behaviours, and since different tasks might need expanded or compressed representation spaces (as the examples in the paper show), it's hard to imagine how the chaotic state of different brain circuits could change in a controlled way (or is this even what one could suggest?). Furthermore, no mechanistic explanation is provided by the authors for the relationship between chaotic dynamics and the dimension of learned representation space. In short, the materials in the paper support the claim that both expanded and compressed representation spaces can be formed by RNNs, though it is not shown how chaotic dynamics are computationally linked to the dimension of RNNs representations, and why this hypothesis should be taken seriously by neuroscientists. The motivation behind their study is clear. The use of RNN as a model of brain dynamics is well justified. The schematics and result figures are clear and their line of reasoning is easy to follow. The contribution of this paper, in its current format, is mainly to neuroscience than AI. This study brings a very important insight to the neuroscience community which became possible by a smart application of AI models to a neuroscience problem. Even though this study was motivated by a neuroscientific problem (low dimensional manifolds or high dimensional mixed selectivities in the brain), the findings of this paper can potentially lead to a better understanding of RNN behaviour for AI community as well. The important idea suggested in this paper is that RNNs can show both behaviours, compressive and expansive, depending on their initial chaotic state. The smart choice of toy examples in the paper helped drawing an intuitive picture of the hypothesis and the RNNs dynamics, and also supporting the main hypothesis. Since this paper attributes a functional importance to chaotic behaviour of neuronal circuitries in the brain, to clarify the importance of this theory for neuroscience, it is crucial to explain how the chaotic state of different circuitries in the brain can be potentially modulated in different tasks or contexts. Do different brain circuitries exhibit different levels of chaotic dynamics? Also, since expansion and compression of representation space are differently preferred for different tasks (as shown by the two toy examples in the paper), can a single neuronal circuitry manage to do both in different contexts (if yes, how)? These issues are not addressed in this study which could be considered as a limitation, though given the limited space of the paper, it's understandable, and can be considered for future steps. Also, the other important question concerns the role of training/learning in the empirical observations on low- and high-dimension neuronal representations in the brain. As shown in this paper, the compressive or expansive representations have been formed after training the RNNs on the task. Therefore, one could speculate that the empirical observations on the dimensionality of neuronal activity could also be the by-product of training animals on the experimental tasks. This can be considered as another reason for probing neuronal activity not only on well-trained animals but also throughout the training process. """,4,1 neuroai19_29_2,"""This paper characterizes RNN dynamics over the time course of response, suggesting that networks that are strongly chaotic and those at the edge of chaos behave differently. These results suggest that the operating regime of an RNN at initialization may modulate how it interacts with task dimensionality. This result is novel as far as I know. It is likely to be of interest to computational neuroscientists interested in RNNs and task dynamics and is potentially of interest to the AI community. The results in the paper are generally clear and experiments seem well-designed. All of the experiments are done with two networks: one initialized near the edge of chaos (EOC) and one that is initially strongly chaotic (SC). The paper doesn't explain how they obtain these two network states, so it's unclear if other properties of these two networks might be contributing to the effects observed. It's unclear how robust the small effect of dimensionality increasing then decreasing seen in figure 3d is. Is this change reliable and significant? Is it preserved on inputs with different, low values of d or in networks with different numbers of units? The main result concerns differences in behavior between RNNs in edge of chaos (EOC) and strongly chaotic (SC) regimes. These terms are not defined in the paper, nor does the paper explain how networks are sampled in these two regimes. Effective dimensionality seems like a reasonable way to measure dimensionality in this context, but it is ultimately a linear measure. Given that RNN dynamics can be highly nonlinear, it is likely to miss some nuances of network behavior. A discussion of possible limitations would help interpretation of the results. The plot legends don't show what dashed and solid lines mean, making the plots hard to read. This is only explained in the Figure 2b caption, which I had to refer back to several times while reading the paper. This paper shows general results on simple RNNs, which are likely to be of interest to members of both the AI and neuro communities. There are several areas where the results of the paper could be extended to make a stronger case to both communities. A few examples: - The difference between the two networks is present at initialization. Does this have implications for RNNs in ML, which are typically randomly initialized before training? - The results seem to suggest that SC networks may offer performance benefits over EOC networks. Is there evidence of this? - How do these results relate to the typical operating regime of networks in task-related circuits in the brain? E.g. how do the results suggesting chaotic networks can temporarily increase the dimensionality relate to results in [8]? The tasks in this paper are very simple, so these results may not directly generalize to more complicated settings.""",3,1 neuroai19_29_3,"""This paper addresses how the dimension of input representations is changing both across time-steps and during training and how different network initializations affect classification performance and generalization. The main numerical finding is that the dimension is reduced during training. However, inputs that are not linearly separable can be better classified by a network initialized 'strongly chaotic'. While the scientific question is important the findings seem preliminary and anecdotal. The results are purely numerical and only found in on one toy problem. The dimensionality estimate is based on the covariance of the network activity across states, which does only take into account structure captured by the first two moments but seems to be standard in neuroscience (Gao, Ganguli, 2015). The training of the network is done using backpropagation through time with RMSProp minimizing a cross-entropy loss for a delayed classification task. The influences of the initialization of the weights on the classification accuracy likely depends on the training algorithm/protocol/hyperparameters, but the authors do not investigate this question. Also, the link between the initially putatively strongly chaotic vs. edge-of-chaos dynamics of the RNN and the dimensionality is not clear. Isn't chaos leading to unpredictable dynamics? Why should that help in classification unless the initial state of the network is fixed and noise-free? Isn't there possibly an issue of robustness to noise in the initial conditions? The quantification of the classification performance is to my understanding done correctly (separate training and test set). It would be desirable to check the results using nonlinear dimensionality reduction techniques (e.g. t-SNE, Isomap etc.), because PCA has many known issues. Therefore, it is not clear if this a more general phenomenon or just an interesting anecdote. Moreover, the RNN is neither biologically realistic (discrete-time, no spikes, random networks) nor state of the art in machine learning (vanilla RNN classifying Gaussian point clouds, no gated units, no test of the hypothesis on standard benchmark data sets using state of the art models). While the dimension across input patterns is quantified similarly to previous work (Chung, Lee, Sompolinsky 2018) albeit, without analytics, a quantification of the dimension of the object manifolds (dimension within a class) is missing. It is not clear how robust the results are with respect to changes in the parameters, e.g. task complexity, dimension of object manifold. In conclusion, while the analysis seems to be done correctly, the results seem rather preliminary and based numerical anecdotal evidence rather than analytical or general mechanistic insights. Both the scientific problem is well-explained, also the methods are very clear and the results and their relation to previous works are very understandable. Questioning how, across time and during learning/training, the dimensionality of representations changes in recurrent networks is relevant both for the neuroscience and the machine learning community. However, concerning the relevance for neuroscience, the model (discrete-time, firing rates, random networks) seems far away from biological plausibility. It is not clear to what extent chaotic rate fluctuations of firing rate models can account for the variability observed experimentally. It is not clear either to what extent the finding that dimensionality of representations decreases throughout learning depends on the training protocol (backpropagation through time with RMSProp) and on the model class. It is not explained how this result relates to previous work linking the dimensionality of neural activity to the task complexity (Peiran Gao, Eric Trautmann, Byron Yu, Gopal Santhanam, Stephen Ryu, Krishna Shenoy, Surya Ganguli 2017). For the machine learning community, the scientific question: what we can learn about the computation from the dimensionality of representations; could potentially be interesting, but a more realistic problem where RNNs are having superior performance compared to feed-forward models would be more instructive. Why would one study the question in such a simple toy problem, without aiming for analytical results? The scientific question of this paper is fascinating and the study of generalization and classification are without flaws. However, to make the study more relevant, it would be essential to investigate if both of the two main findings (dimensionality reduction of input throughout learning, and improved classification for linearly nonseparable problems) are a general phenomenon or just a feature of the training protocol (BPTT with RMSProp), the nature of the toy problem (classification of Gaussian point clouds), and the model class (randomly initialized discrete-time vanilla RNN). To make it more relevant for the neuroscience community, this could be investigated in a biologically more plausible model/task. Also, predictions that could be tested in experiments would be desirable. To make it more relevant for machine learning, either rigorous results (e.g., upper/lower bounds on the dimensionality, capacity of the network, etc.) (c.f. Chung, Lee, Sompolinsky 2016, 2018) or applications to state of the art RNN problems (e.g., machine translation, NLP) using state of the art gated units would be desirable.""",2,1 neuroai19_30_1,"""They make modifications to an existing generative model of natural images. They do not make direct comparisons to previous models or study quantitatively the results of the model with respect to its parameters. It is difficult to judge whether the new model is important because it has not been evaluated except by eye it does seem to reconstruct an image. They show images of a single reconstruction but no quantification of reconstruction quality or comparison to previous methods. In the spirit of insight it would have been very nice to have a quantification of error with respect to parameters (priors on slow identity, fast form). If it had been evaluated and its efficacy varied in an interesting way with respect to the parameters of the model this could be a potentially important model to understand why the nervous system trades off between object identity associated features, transformation features, and speed. The statement that: GANs and VAE features are not typically interpretable. Seemed broad and was unsupported by any citations and to my knowledge GANs and VAEs have been used specifically to find interpretable features. Paper was organized, figures clear and readable. Some development of the model could have been left to the references and didn't add much to their contribution (e.g. Taylor approximation to a Lie model) . When they say steerable filter I was a little confused, do they just mean the basis vectors learned vary smoothly with respect to some affine transform parameter? Their statement of the novelty of their method: (1) allowing each feature to have its own transformation was not clear. Does this mean previous methods learned the same transformation for all features. They make an interesting connection to speed of processing that rapid changes better represented by the magnocellular pathway would be associated with transformations and slow parvo with identity. It was not clear though where they experimentally varied/tested this prior in their algorithm. So while an interesting connection they did not make clear where they substantively pursue it. They draw an analogy between the ventral and dorsal stream of cortex and bilinear models of images. The main place to improve is to have some quantitative analysis of the quality of their model perhaps MSE of image reconstruction. Then this evaluation could be used to study impacts of the parameters of their model which could then lead to neural hypotheses. They have some qualitative evaluation in images of filters but they could explore the parameter space to understand what led to these features. One of their stated novel contribution was that their filters were convolutional but they do not discuss the potential connection convolutional filters have to transformation of features which seemed like a gap. Weight sharing across shifted filters separates out feature and position yet many of their learned transformations are also translations. Is this an issue of spatial scale? This warranted some potentially interesting discussion though admittedly 4 pages isnt a lot of space.""",2,1 neuroai19_30_2,"""Interesting extension to bilinear sparse coding models, but there is insufficient evidence in the work to support the claims in the abstracts - particularly that it captures the statistics of the transformations between frames. There are no quantifications of the performance of the model particularly in comparison to the original model that they are extending. The reconstruction of a single image in Fig1e is not a convincing test of the model - one would want to see how well the feature dynamics predict the next frame, if they are indeed sufficient to capture changes in the videos frame by frame. Figure legends/descriptions are too short, not totally clear what is shown in Figure 2. Unsupervised approaches might be interesting to some in the AI community.""",2,1 neuroai19_30_3,"""This paper continues a line of work from the 2000s that has not had significant recent interest. I am glad it is getting tried with modern compute scale and tools, and I believe the results are promising. This submission is not sufficient on its own to convince me though that this approach will tell us new things about the brain or about artificial neural networks. The algorithm was presented very clearly, and I believe all claims to be correct. I was surprised that x was set by projection rather than inference, and would have liked better understanding for why this was effective or desirable (though this may not be possible w/in length constraints). The writing was very good, and the algorithm and results were very clearly presented, especially considering length constraints. The paper presented an unsupervised machine learning algorithm, which was used to try to describe representation learning in the brain. This was a sensible algorithm for unsupervised feature learning, algorithm and results were clear, and results were reasonably good. """,4,1 neuroai19_31_1,"""Neuroscience certainly has seen an explosion in studies looking at network topology and applying the tools of graph theory/network science. Given how easily artificial neural networks can be mapped onto graph structures it seems very natural to combine the two. It is also a straightforward way to bring in biological data, potentially at exactly the right level of detail/abstraction. The particular results in this paper, however, do not reflect a particularly strong instantiation of this concept When claiming that a technique results in better performance on a task, the baseline network tested is obviously very important. The baseline model is described as containing one conv block and a fully connected layer. It seems that this baseline has far fewer processing stages and parameters than the models with DAGs. And the DAG models have differing numbers of units amongst themselves. Comparing to ""frozen"" (ie untrained) DAGs does not control for the benefit of having these extra nodes as even random weights can still perform well on simple tasks. Relatedly, the use of MNIST for the main comparison metric is a poor choice because the baseline model already performs so well that marginal increases are hard to interpret here. It seems that fashion mnist is a harder task at least and should have been used to compare models. The introduction was well written however details were lacking in the methods and results. For example: ""DAG"" was never actually defined. Why was the c elegans network the only one tested on other datasets and not tested while frozen? Why does the validation accuracy start so high in Fig 2? Bringing neuroanatomical data directly into deep nets is challenging and while many assumptions and simplifications were made in order to do it in this study, it is still an admirable attempt at combining neuroscience and AI. """,3,1 neuroai19_31_2,"""The study attempts to use the wiring statistics of real brains to build neural networks. While it is an interesting approach, the choice of task and the model assumptions are not well suited to the topic. The performance improvements are also not very convincing, and it's unclear if we should expect these results to be generalizable. The choice of network connectivity is poor. The authors use undirected networks and randomly convert them to directed networks, but connectome data with directed weights are readily available in a multitude of organisms, including C. elegans and mouse. The results are not convincing, with MNIST performance at above 97% in all cases. Why are results for C. elegans not shown in Table 2? Additionally, the issue of number of trainable parameters is not explored. Freezing the weights is not sufficient -- more frozen parameters could still account for the performance benefits compared to the baseline. The issue of number of trainable parameters is not sufficiently explored. I would have like to see a better exploration of how the results depend on this quantity and how things change with different assumptions about learned and unlearned connections. It's hard to believe that the C. elegans connectome would be optimized for MNIST in any way. Also, ignored directedness in the datasets is an unnecessary omission. More work could have been done to bring the models closer to the biology.""",2,1 neuroai19_31_3,"""The connections between network topology and function in both neuroscience and AI research are very interesting. The pursuit of research at this intersection is highly important. This paper does fall into that category work, but the methods and results presented therein do not add up to an important contribution to the area. The paper goes to some length to motivate the research it presents, providing a brief survey of the development of network neuroscience that cites the connections between several prominent publications underlying that development. The technical aspects of their own work are detailed less satisfactorily. The structure of the networks is presented in citation, but not actually detailed in any measurable way. Their method of constructing the networks is described in text reasonably well, but the diagrams presented (e.g. Fig 1) are not detailed nearly enough. It is not clear how the networks differ. Metrics are presented to describe the modular structure of the borrowed network subunits, but their connections to the desired topological results are not made clear. Their results are also speciously presented. Four of the presented models start - without any prior training on the MNIST task - performed at >97% accuracy. Moreover, the results presented are explicitly labeled as validation performances. The loss patterns are also ill-detailed; increases in validation loss are not described and mesh strangely with the presented classification results. The research is well-motivated, but the actual project pursued is not. The structure of the network models adopted and used is not clearly communicated to the reader (see technical rigor section) and the figures are lacking in detail. For example - half of the line plots in figure 2 (subfigures not individually labeled) are not described in either the text, the figure legend or the figure caption. There is a clear message conveyed through this work, but it doesn't answer the questions presented in the ostensible thesis of the paper: how does network topology influence computation. They've shown that they can get high classification results on a particular sort of network architecture, but don't explore how the defining aspects of those topologies influence the results presented. The overall intent of the work is unclear for that reason Ideally, this would be highly intersectional; however, the lack of execution toward the stated intent of the paper do not follow through to actually fulfill that intersection. Understanding the role of network topology in network computation is important, but I think that the work presented here is less so.""",2,1 neuroai19_32_1,"""As recording technologies continue to increase the number of simultaneously recorded neurons across many model organisms, developing new techniques to understand how these neurons form local circuits (and hence what computations they perform) will be increasingly important. The proposed model is an extension of several other well-known techniques, and incorporates them all cleanly. The model and results are well-described in the limited space provided. This model uses several recent advances in AI (VAEs, beta-VAEs, concrete distributions) to tackle a difficult neuroscience problem. Strengths: This is an interesting application of several AI techniques to an equally interesting and increasingly pressing problem in neuroscience. The model was well-motivated and well-described, and the synthetic data results are convincing. The authors did a reasonable job of comparing to other recent methods introduced in the neuroscience literature. Areas for improvement: I have two suggestions for improving the analysis, which will be helpful in convincing neuroscientists that this is a useful tool: 1) There are other models for determining dynamic functional connectivity that might be additional useful comparisons; for examples see Foti & Fox, ""Statistical model-based approaches for functional connectivity analysis of neuroimaging data"" 2) An interpretability issue arises when the network is only partially observed. Since this is almost exclusively the case in neuroscience, it would be interesting to see how this method performs when you simulate, say, 24 neurons and then only observe 12 of them. How does this change the inferred network structure?""",4,1 neuroai19_32_2,"""Latent dynamics of neural populations reflect the computations performed by the population. Therefore, inferring the these latent dynamics from noisy multiunit recordings is of great importance in system neuroscience. Building on top of recent advances in unsupervised deep learning, this paper proposes a novel method for inferring dynamic latent connectivity between single neurons based on recorded spiking activity of single units. The theoretical concepts and the experimental results are sufficiently rigorous and convincing. Every part of the study is described clearly. Despite the limited space provided, the authors have successfully managed to explain the concepts, theory, and results with sufficient detail and have not left any ambiguity in the text. The paper has employed several most recent concepts in unsupervised deep learning to tackle an important methodological problem in neuroscience.""",4,1 neuroai19_32_3,"""Finding unsupervised methods to accurately estimate network connectivity from output time series data is paramount to the analysis of complex systems with unknown network structure (e.g. neuronal data). The model presented in this paper is a good extension of the NRI methods it's built from and is shown to be relatively accurate in an F1 metric of estimated connectivity, but the poorly detailed comparison estimation methods and very low data reconstruction accuracy are problematic. The dNRI model proposed is well-detailed in concept: a VAE model whose encoder and decoder networks map from network node signal data to estimates of network connectivity (given as a probabilities) and vice versa, respectively. The network is able to produce network structure estimates that score highly on an F1 metric relative to other methods presented. This is impressive, but the results are not thoroughly presented or adequately qualified. Some of the methods presented as viable comparisons of network estimation performance are questionable. The use of an NRI method is well-motivated, as the presented dNRI model is a direct development from a ""static"" or time-averaged NRI model. The use of GLM models from estimating spiking activity is not motivated beyond a passing citation that does not relate them to the other models presented. The use of Tensor Decompostion (low-rank canonical polyadic forms of trial-segmented multidimensional signal data) and the SeqNMF model to estimate network connectivity is interesting and partially relevant given this pair of models' ability to model dynamic changes in network activity patterns, but is not proper for technical reasons, the foremost and most objectionable of which being the authors' assumption that TDA's neuron activity dimension output and SeqNMF's canonical sequence firing output estimates can be used to estimate network connectivity at all. In the former case, a inappropriately cursory description of the network connectivity estimation method is given as taking the dyadic product of the TDA neuron/channel vector and ""convolving"" the resultant connectivity matrix with the trial and time vectors. The second part of that statement, found in a single sentence of the second paragraph of section ""3 Experiments,"" does not provide the clarity required to treat such a mathematical operation. The first part, regarding the use of the dyadic (outer) product of what are essentially signal component strength vectors to estimate network connectivity, is not appropriate. Showing that two ""neurons"" are coincidentally firing within a data segment window is not at all equivalent to the description of the network connectivity estimates produced from the dNRI model and is in its own right a questionable statement. A very top-level objection I have with this method is that it is only capable of producing square matrices, while figure 2 clearly shows that the ground truth networks tested in the model are directed graphs with non-symmetric association matrices. Furthermore, the order of the TDA method is not stated. If greater than 1, this would produce several individual network connectivity estimates. In the current publication, there is no mention of how these order vectors are combined. The second of the two methods, the SeqNMF model, decomposes an input signal of superimposed firing sequences through deconvolution into a tensor of firing sequence atoms and a time signal of impulses representing their place in the recomposed signal. The authors have extended their interpretation of these outputs to consider that firing sequences imply an underlying network structure, and have apparently sought to estimate that structure by taking some form of an outer product of individual sequence matrices. The means by which they perform this lightly described method of network reconstruction is entirely unclear. This would suffer, the review must assume, from similar issues regarding the necessarily symmetric output of dyadic products over real-valued vector data; however, the apparent break in interpretation from what the reviewer understands as the correct understanding of SeqNMF model outputs is much more damaging to the publication's quality. One further issue is that of the lacking treatment of input data reconstruction accuracy results. There is mention that ""Many of the baselines outperform dNRI at reconstructing the original spiking activity, but this is a consequence of the difference in training objectives or inference procedures,"" but the exact nature of those differences should be explored much further. While the dNRI model is not the only model presented that is reporting upwards of 100% f-norm error metrics, it should be explored more clearly Putting a single number on this measurement is overly reductive in the case of this publication. The descriptions of the dNRI model, the optimization methods used to train it, and the data that the model was trained and tested on in the scope of this paper are clear and reasonably thorough. For that reason, I'd like to rate it highly; however, the descriptions of model test results that follow the laudably written sections lack the details required for reasonable comprehension. Many of these issues are more fully detailed in the ""technical rigor"" section, though the reviewer will also allude to them here, as the omission of these important details is a detriment to the paper's ability to cogently communicate the validity of this research project the reader. For that reason, I've split the difference at the with a weighted sum toward the lower of the two scores. The use of VAE models to estimate network structure is clearly very powerful, and the estimation of network structure from time series outputs of complex systems is an important and general open problem with many useful applications in the analysis of several modalities of neuroimaging data. This paper contains a detailed description of just such a model that is designed to function at small time scales capable of modeling such dynamic functional connectivity activity. While it does not make a good case for comparison to other supposedly state-of-the-art models, the potential for utility is quite high within the neuroscience data analysis community. The author has proposed a very interesting model in this submission, but the same level of technical rigor deployed in model development has not been distributed to the many aspects of its testing and validation. I would not recommend this submission's acceptance in its current state.""",2,1 neuroai19_33_1,"""The paper is very ambitious. If all the claims in the conclusions could be verified, this could result in an important contribution, but there's no evidence for any of them. It is not clear why the amplitude was binarized in high versus low and not used as a continuous regressor. Why are Anovas used? Which are the multiple levels? Anova tests should be corrected for multiple comparisons (main effects and interactions), see Cramer, A. O. J., van Ravenzwaaij, D., Matzke, D., Steingroever, H., Wetzels, R., Grasman, R. P. P. P., Wagenmakers, E.-J. (2015). Hidden multiplicity in exploratory multiway ANOVA: Prevalence and remedies. Psychonomic Bulletin & Review, 23(2), 640647. doi:10.3758/s13423-015-0913-5. Since the results are obtained using scalp recordings, it cannot be concluded that the results are specific to the human brain, and to a specific location within it. Fatigue and time on task are not taken into accound, and they have been proven to influence the spectral power in attention related tasks. The frequency of the flickering stimuly is never mentioned. So I had to guess that it was 14 and 18 Hz and trusted the authors on their correct isolation. The choice of ANOVA tests is not clear. Not so much AI. A very elegant and comprehensive study on the effect of attention on the amplitude of resonance frequencies is the following one Gulbinaite, R., Roozendaal, D. H. M., & VanRullen, R. (2019). Attention differentially modulates the amplitude of resonance frequencies in the visual cortex. NeuroImage, 203, 116146. doi:10.1016/j.neuroimage.2019.116146 Of course it is not the sole way of addressing these issue, but most of the points that seemed not clear to me in this submission, are properly addressed in the paper above.""",2,0 neuroai19_33_2,"""The authors present a useful application for the BMI: delivering a stimulus during different attention states. It would have been useful to do a thorough comparison of how different high/low threshold values could affectthe results. The submission is clear. Applicable to neuro. The authors establish a correlation between EEG-SSVEP power and perceptual accuracy in an attention task. However, the final conclusions need further support. The result that accuracy increases for target but not distractor warrants further investigation. Is there a correlation between the target and distractor SSVEP power, and should we expect an effect because of this? The negative result on reaction time also needs further analysis: could this be because the high threshold was not sufficiently high?""",3,0 neuroai19_34_1,"""The paper addresses the potential for modelling working visual memory processes with recurrent neural network architectures. Understanding these mechanisms is an important task in neuroscience, but I do not believe that the computational modeling approaches presented here are sufficient for publication. The development of the models presented is not discussed and only references other work. Model implementation is also not discussed. The results presented show evidence of systems learning more accurate representations of the labeled data that has been presented to them, but the differences between model performance metrics is not adequately explored here. Statistical significance statements are non-existent and no form of hypothesis test is presented regarding model accuracy. One can assume as much that a conjugate prior for the binary classification task can easily enough produce a 95% confidence statement regarding ""chance"" in this sort of test, but no such statement is presented. While the overall goal of the paper is clearly stated in the abstract, the publication loses clarity after the introduction. The ""models"" section is structured in an unusually segmented manner that fails to adequately detail the functional or structural similarities between them. Furthermore, the language use is poor. Overlapping clauses and run-on sentences dominate much of the text in this section. The results section lacks a clear message comparing the performance of the different models. The figures show positive results regarding the ability of the models to perform a binary classification task on the CIFAR-100 image database, but the comparisons are incomplete. Moreover, the lack of any baseline comparison metric or statistical significance statements makes their importance hard to interpret. Figure captions themselves are hard to interpret, as they contain incomplete sentences and don't fully detail the information shown in the figure. The biggest lack of clarity here is found in the gulf between motivation and result: if they're ostensibly attempting to model an organism's visual working memory functions, they haven't stated what organism that is and furthermore haven't made any real connection in the work between their presented models and the physiological systems that they're attempting to model. The only mention of biomimetic form or function is made in passing, and the reader is led to assume entirely the results of cited works by Braver and D'Ardenne. No confirmation or recreation of those cited results is attempted here. The presented models are clearly inspired by the structure and function of cortical visual stimulus processing regions, but that connection is at best one directional. No loop back to biological function is made in this paper regarding validation of their presented models. They state that the goal of this work is to apply these systems to the analysis of actual recorded data; such work may provide the full connection required to validate some aspects of the work presented here.""",2,1 neuroai19_34_2,"""The authors test different mechanistic models of working memory. They conclude that a model trained with reinforcement learning outperforms the other models. However, the conclusions are hard to assess because the submission lacks detail. It is difficult to assess rigor. The submission lacks clarity and detail. Testing different mechanistic models of working memory is important for both Neuro and AI. One strength of the work is that the authors relate their models to biological models of working memory. However, the conclusions would be stronger if more details were included: 1) more details on the testing procedure, 2) what statistical test was used to arrive at the conclusions, and 3) what are the errorbars in Figures 2-4?""",2,1 neuroai19_34_3,"""The paper focuses on working memory and reinforcement learning, which is an interesting topic. However, the choice of task is not a good one to probe this topic. The task as described is simply a familiarity detection task, and there is no reason to think that this is a good task for RL. In general, I do not see much that this study provides beyond previous work on deep RL. As described above, the choice of tasks is poor, which likely contributes to the modest improvement using RL as compared to supervised learning (it is not clear if this improvement in statistically significant). The paper does not provide intuition for the reason behind this purported improvement. Insufficient intuition for the results is presented. The authors are somewhat loose with their definition of a ""context"". The general topic has the potential for interdisciplinary interest, but only if the task were changed to something more appropriate. The authors do not analyze the representations formed in the networks they study, which would be a necessary step to connect their approaches to biology.""",2,1 neuroai19_35_1,"""CNNs have been explored thoroughly with primate and human data. Given the extensive use of mice in neuroscience research it is very important to understanding how these models relate to those systems and ideally how to build a good model for mouse data. The authors seem to take into account all the relevant complexities including image scale, RF size, running speed, etc. I would be interested to see how the results would differ if trying to train and predict on average responses over many trials of the same image, as this would be a more direct comparison to what is done in primate research. The random weight test was an important and insightful control to do. The presentation is mostly clear and straightforward. One point of confusion as to how the model was trained: the paper says ""All components of the model are trained jointly end-to-end"". I initially believe the VGG core wasn't trained, but this statement made that unclear. This work directly compares a task-trained ML model to mouse data and so is directly at the intersection of ML and neuro.""",4,1 neuroai19_35_2,"""-- The surprisingly high power of randomly weighted DCNNs is a point that has popped up a couple of times in recent human fMRI / MEG work. The present paper makes the important case that random networks should be included as a matter of course in DCNN modelling projects, and sounds a note of caution about the field's temptation to over-interpret the particular features learned by high-performing trained networks. -- Comprehensive data measurement and modelling pipeline. Use of the same spatial transformer model with an interchangeable bank of input features is elegant. -- Very well written. Figures exceptionally detailed and thoroughly labelled. Methods described clearly and in good detail. -- Mostly neuroscientific, but addresses the important topic of how models from machine learning can best be used in neuro research. -- Generally, great paper. Clear presentation of thorough work, exploring an important question. -- Would have been great to include another Imagenet-trained architecture, since different architectures have widely varying macaque brain predictivity, and that of VGG16 is not particularly high (Schrimpf et al., 2018 BrainScore). -- I'm not a big fan of the asterisks in Figures 3A and 3B used to indicate the best layers in various model tests. It doesn't provide any additional information to the data lines themselves, and it leads the reader to expect these indicate statistically significant comparisons. -- Typo page 4 line 158: ""pray"" >> ""prey""""",4,1 neuroai19_35_3,"""Object recognition networks have been the benchmark model for a long time, but other options have not been explored in depth. This paper points out that networks with random weights fit mouse visual activity just as well as models trained to perform object recognition tasks, suggesting that the object recognition task itself is probably not an ethological comparison. Thoroughly thought-out model and fitting procedure. Very clear. Strong intersection. Would have liked to know more about which layers of the random network corresponded to neural activity, as a sort of benchmark of how many non-linear RELUs the pixels might need to go through before obtaining a representation that's as similar to neural activity as VGG.""",4,1 neuroai19_36_1,"""While the question of how neural networks may act over concept space is important, I dont think the approach used by the authors correctly adress this question. The work of Hill et al. (2019) very clearly addresses these questions by devising tasks that require generalization across domains, showing how training regime is sufficient to overcome the difficulties of these tasks, even in shallow networks. I dont see how the current work adds more clarity to this research direction. The main point relies purely on a visual representation of the top PCs of the penultimate layer of a CNN, which I believe is insufficient. The authors should have identified a task where networks trained on MNIST perform poorly, and then propose a different strategy or architecture. Overall the writing is relatively clear, but it would have been beneficial to describe the hypotheses more explicitly, e.g. what neural activity would be expected for a place, grid, or concept representation with respect to MNIST. The question of how the brain and artificial network can perform relational reasoning is critical in both fields, since many believe that it may be one of the primary ingredients of intelligence. Its also critical to understanding the function of the hippocampus and entorhinal cortex in humans.""",2,0 neuroai19_36_2,"""I do think that investigating under what conditions in artificial networks grid cells appear is very interesting. However, I was not fully convinced that the results presented here made substantial contributions to the AI or neuroscience field. While the specific techniques employed by the authors seem perfectly fine and relatively rigorous, the question itself (do hidden units in the later layers contain grid-like patterns) felt rather simple and uninformative (simple is great, as long as it still tells you something interesting). The paper was overall relatively well-written and easy to follow. The figures were simple and easy to interpret. The authors made their methods, results, and claims quite clear. The authors investigated neuroscience-informed properties of DNN. I think that searching for neuroscience-inspired properties in deep networks can be interesting, and is certainly within the intersection of AI and neuroscience. - I am not convinced that the ability of deep networks to solve analogical problems relies on the presence of grid-like properties in the hidden units. Perhaps this is ignorant of me, but I think that this is a critical point for the paper and thus needs to be better motivated and explained. In particular, I think that while grid cells can support path-integration, not all networks that path-integrate necessarily contain cells with hexagonal symmetry. - I am not surprised that the network trained by the authors does not show grid-like responses. It seems reasonable that the network learned to classify each number separately, without learning the full manifold. If someone were to record from a real brain from the visual areas while the animal was performing a discrete visual classification task, I am not sure that they would see grid cells there either. Thus, unfortunately the current paper reads that the authors trained a network that didn't need to learn the full manifold, so it didn't, and then didn't show properties that one may (or may not) expect for it to exhibit if it had learned the full manifold. I think there could be something interesting in this endeavor, but the implementation carried out by this paper wasn't very convincing to me. - A minor (but important) comment - grid cells have not yet ""known to support path integration in rodents and humans"", since there is no casual experiment that shows their necessity. I think this statement is also indicative of my general complaint - that the importance of grid cells is not fully fleshed out or supported in this work, and thus the lack of grid cells in networks is over-interpreted. """,2,0 neuroai19_36_3,"""While the general method of training neural networks and examining their representations for key insights and relations to neuroscience is a valid one, the methods and results do not seem to answer it in a way that is principled or well thought out. There is also substantial misunderstanding about neuroscience concepts throughout the paper. There are major logic / misrepresentation issues throughout the paper, some based on incorrect assumptions and possible misunderstandings. The neuroscience motivation lacks consensus in the community and some concepts are wrongly characterized, such as the definition of path integration. This greatly weakens the motivation and connections to deep learning. The main method is a simple analysis of vanilla CNN trained on MNIST classification. The results are fairly unconvincing and not substantially justified in the approach. For example, the justification that this wont allow the network to navigate in concept space seems arbitrary. Finally, it implies that classification is the right task to train the network to perform tasks based on relational reasoning. What about unsupervised tasks, and training with other models such as VAEs? Design choice and analysis are poorly justified, with some ambiguity about how the results were obtained. Its also unclear why the architecture, optimizers, dataset were chosen. Certain methods were also not justified - why limit the analysis to last two layers? Also why is figure 1 focused on the output layer, which are trained to represent the classes? Not clear on why the plot shows place-cell like activity. what is the justification there? tSNE embedding space cannot be interpreted as its not linear. Similar problems apply to the PCA analysis. Grid cells represent cells that fire in response to various locations, not separate cells that fire to resemble some geometric pattern within the brain. Aside from the obvious take, the intersection is somewhat strained. Unclear what the ultimate goal of the paper is - as the idea appears to be based on a flawed understanding of mechanistically whats required in the brain to solve certain problems. The authors propose training a deep network and seeing if activations similar to concept and grid cells exist in the hidden layers. The motivation, however, is strenuous, and the results are not convincing. It is unclear why these cells are necessary and sufficient to do relational reasoning as the authors claim, and the PCA / tSNE don't seem to address the question the authors sought to address. """,1,0 neuroai19_37_1,"""The authors have studied the problem of how the degree of neurons in a network influences their ability to learn. The idea is that degrees are less flexible than the weights of connections. Therefore, for a neuron with fixed degree, the ""size"" of the space of possible weights should be maximized. The size is computed 3 different ways, and this model is applied to the Drosophila mushroom body connectome. This is a fresh approach and has implications for AI; unfortunately they are not emphasized. I think the mathematics are correct and well-explained through Section 2. I thought the section on maximum entropy was harder to follow, probably because the inherent mathematics is more complicated. This could be alleviated by adding references & maybe pointing to an appendix. I would not care if the references extend beyond 4 pages; you can also eliminate line spacing there. You can also gain space by using the commands instead of for the parts of Section 2. Specific suggestions: * L. 39 strike ""synaptic partners"" and use ""degree"" * L. 40, I'd add ""J_i 0"" when defining J_i. * The volume & area of the simplex (ll. 58 & 73) need references. * Notation S_K is not defined (l. 92), but I gather it is the volume/area calculations from before. I would suggest using S_K^{ net}, S_K^{= net}, and S_K^{individ} or something along those lines to clarify that these are the different ways of measuring ""size"". In fact, the language of ""size"" throughout the paper is kind of confusing until Sec. 2.1 when things become concrete. I would mention in the intro that you will use volume/area as ways to measure size. * Ll. 67 & 79 ""and vice versa"" It is unclear to me what the vice versa case is. Clarify. * L. 93 ""K^max"" isn't defined, strike ""for large K^max"" * L. 99 what is S? How you use the weights is muddled. * L. 99 ""Laplace approximation"" and ""model evidence"" need references. I gather that ""model evidence"" is something like log-likelihood; be more precise. * Ll. 101-102 in the binomial random wiring model, how do weights of the connections enter? The paper is well-written and for the most part easy to read. As mentioned in the technical review, the language of ""size"" should be made clearer in the introduction by stating that ""size"" will mean volume or area under different assumptions. Similarly, using the precise language of ""degree"" is preferable in my opinion to using ""partners"". I would also spend a little more time in the intro motivating why ""size"" is something that'd be optimized. Can you learn more with a larger size, and do you think this connects to measures of dimensionality or complexity in ML systems (Rademacher/VC dim)? Also, constraining the degree is kind of what happens with convolutional layers, although they are very non-random. There is no discussion section and there should be. What are the take-aways from this analysis? How do we interpret the conclusion that ""other factors come to dominate"" (L. 118) the network as Drosophila develops? I would like more speculation, for the biologists. Similarly, your model predicts an optimal K* dependent on various parameters; whereas this is known for mushroom body to be ~7 (in cerebellum, arguably similar, it is ~4). I'd like some discussion of whether your model is predictive of these properties or what it says about those networks' computational ability. * L. 3 ""size"" -> ""sizes"" & ""determines"" -> ""determine"" * L. 6 ""partners"" -> ""neighbors"" sounds better to me * L. 17 ""learning rule"" is ""not known"", but what about Hebb/anti-Hebb STDP rules? * L. 20 ""consider the hypothesis"" -> ""hypothesize"" would be better * L. 25 ""regulate"" is used twice, I'd change the second to ""stabilize"" or ""normalize"" or similar * L. 34 ""We find that overall, ..."" -> ""We find that, overall, ..."" * L. 36 would be nice to have some speculation about this ""developmental progression"" * L. 39 Suggest rephrase to ""where a neuron has degree K"" since you've already introduced degree = # neighbors = # partners * L. 54 for balanced references, I'd add ref to recent work of Arenas on experimental verification of this scaling * L. 60 ""measureable synaptic weight changes"" could mention ""i.e., # of discrete vesicles"" * L. 69 ""different types"" I think you mean ""many types"" of neurons <- plural * Figure 1: Suggest adding ""bounded net"", ""fixed net"", ""bounded individual"" labels to each row, on the left hand side under (a), (d), and (g) * L. 90 ""provides a cost function"" is awkward, maybe simply ""determines"" is better * L. 118 ""Other factors come to dominate their wiring"". What would these be? Since binomial is a good fit, would you say the network is random or not? * L. 74 ""net excitatory"" strike ""excitatory"", you haven't talked about E/I at all so this is confusing I think this is the weakest part of the submission. The motivation for this work is almost entirely from the biological perspective. I think that this work probably does have some implications for AI, but it needs to be discussed by the authors. Places for this are in the introduction & potentially the discussion if added. (I am actually uncertain whether this workshop offers opportunity for revision, but I am writing my review like I would any paper and hope the authors will at least consider making some changes for their next version.) To an AI person, familiar with statistical learning theory, it will probably be hard to find a good take-away from this work. I think a natural connection to try and make would be to the complexity of learning with such a network. Constraining the weights in a network to lie within some ball of radius R is a way to bound the generalization error. The idea of constraining the weights in a network is closely connected to classical types of regularization, which bound the norms of the weights. Another possible connection to illuminate would be to weight normalization techniques including batch normalization. Pruning neural networks to reduce their size is another area to look into, since that also reduces the degrees from fully-connected. How to structure a network in terms of in- and out-degrees of neurons is a fundamental question. In neuroscience, this area has been tackled more because neuron degree is easy to measure experimentally. On the AI side, there hasn't been as much focus on this kind of network structure, with the most common structures being fully-connected or convolutional. So I see this work as having potential relevance there. But more work will have to be done to see whether artificial neural networks end up following these same kind of principles. I like the simplicity of the analysis in this work and the dataset that it is applied to. I only wish there were more discussion of the take-aways for both neuroscientists and AI researchers. But I see that interest for more as a positive rather than a negative.""",4,1 neuroai19_37_2,"""This paper addresses questions that are important to fly olfaction research, but the discussion doesn't say much about how it can provide insight for AI research. The methodology present in the paper appears fine, but I have some questions about the neuroscience aspects: - Is there any experimental evidence that KC synapses undergo synaptic scaling? - Is there any experimental evidence that young flies have non-random KC connectivity? - What was the threshold for determining significance between models? I ask because it seems the evidence is very close between the binomial model (which is experimentally suggested) and bounded/fixed net weights for single-claw adults, and a small change in what it means to be significant would lead to the result that synaptic connectivity in adults is also better supported by fixed/bounded weights. The paper is nicely written, but it reads like it was originally a much longer paper compacted down to four pages. For example in Section 3, there isn't much detail on the maximum entropy and Laplace approximation methods, or how the connectomic data was integrated into the analysis. Also, Fig. 2 could use some sort of adjustment to separate out the blue and orange lines. Right now this finding falls mostly into neuroscience, and I feel this paper needs some added discussion on how the study of fly olfaction can be used to advance AI.""",3,1 neuroai19_37_3,"""In this work, the author(s) first characterized how various homeostatic constraints influences the optimal connection degree, then studied whether the connectivity structure found in Drosophila olfactory circuit is consistent with those homeostatic constraints. The results suggest that the model with the constraint on the net weight provides better fit for immature KCs than the binomial wiring model. Although the work has some merit, Im not convinced of the biological relevance. In the manuscript, the parameter space was defined as a K-dimensional space, and the authors optimized K under some homeostatic constraints. However, considering the actual neural circuit, the problem should be defined as the problem of choosing (linear) K-dimensional subspace from N potential space (see eg. Ashok-kumar et al., Neuron, 2017). Thus, the benefit of having large K is underestimated in this study, which somewhat weakens the biological relevance. In addition, the author(s) should discuss the range of p and J for which the optimal degree actually exists, as Eqs. (2) and (3) dont have any (real) solution at wide range of p and J. If you naively apply the stirlings approximation, you get slightly different expression for Eqs (2) and (3). The author(s) should clarify the approximation they used. I couldnt get how the results shown in Fig. 2 are calculated either. In particular, the accuracy of the binomial model highly depends on whether only the connected pairs were fitted or all the potential connections were considered. Moreover, in the former scenario, the distribution needs to be shifted by one to cancel the selection bias. Yet, these points were not discussed in the manuscript. The choice of regularization is still an important topic in ML, and I believe it is insightful to study what kind of regularizer is used in the brain. Because PNs-to-KCs connections are arguably not plastic, I'm not sure an analysis based on the weight volume is biologically relevant, even if the combinatorial term is added up to the model. Still I believe this line of work is important for understanding the wiring principle of the brain.""",3,1 neuroai19_38_1,"""The paper describes the attempt of classification of a very specific and elusive feature, namely dream recall. While in principle the effort could be important, there is no evidence of additional insight on the neuro/cognitive feature under exam. Nothing is said on data processing concerning artifact removal. In this sense we cannot be sure that the features responsible for the separability are related to neural activity or other (movement, etc). Aso it's not clear how and why the groups were divided in ""high"" versus ""low"" dream recall, and what this means. Also scalp signals contain a mixture of activity coming from different brain regions, on top of external signals, physiological artifacts etc. In this sense it's misleading to talk about brain signals, and even more of brain networks, in this context. The paper is well written. AI techniques are used to solve a neuroscience problem, but there is not enough evidence that neuroscience is involved or benefitted in this case. You could look at features which are more directly related to brain activity and less influenced by mixing and volume conduction such as the shape of the waveforms, and the presence of bursts. See also Wong, W., Noreika, V., Mr, L., Revonsuo, A., Windt, J., Valli, K., & Tsuchiya, N. (2019). The Dream Catcher experiment: Blinded analyses disconfirm markers of dreaming consciousness in EEG spectral power. doi:10.1101/643593 for a set of measures which could be used.""",2,0 neuroai19_38_2,"""I think this is potentially a first step in an interesting direction, but without the ability to interpret which features of the EEG signal are important for classification there is little insight to be gained. The model and training pipeline are well-devised. I would, however, like to see error bars on Figure 2 to understand to what effect these differences are significant. The paper is easy to understand. Though a deep learning model is fit to EEG data it does not (at the moment) teach us anything more about dream recall than the linear classifier. It would be interesting to see which features of the EEG signal contribute to the classification.""",2,0 neuroai19_38_3,"""Classification of neural states is an important problem. Experiments weren't entirely convincing. The paper was relatively clear. Application of NNs to EEG data but little links back to neuroscience. Overall comments: The experiments conducted do not necessarily provide a strong argument that supports the authors' claims. Detailed comments: Fig 1 - The description of the model architecture is a little confusing. Sticking with the convention of (depth/number of filters, width, height) wouldve worked better (as done in table 2).There is also an error in the text in figure 1, where depth and width have been interchanged.While the sizes of the filters have been mentioned, the number of filters used per layer haven't, explicitly. Given the errors in the figures text, thoroughly understanding the architecture is a little problematic. Also, the first two convolutional layers are described while the rest aren't, and the authors immediately move onto the output of the feature extractor, which is a little disconcerting. Line 82 - Unclear what the authors mean by feature space. Would be a good idea to mention that these are the outputs of the feature extractor in the text as well as the caption of Fig 3. Lines 90-91 - While it might be true that the feature space as defined by the author does not correspond to subject specific features, it should be noted that these features are then further passed through several fully connected layers. It is possible that some subject-specific over-fitting might occur in the deeper layers of the network. It might've been more convincing if the features at the last hidden layer had been visualized to make this point. Fig 3 - In the same vein as the comment made about lines 90-91, it would've been interesting to see the t-SNE clusters coloured by HDR and LDR. Doing so would've also provided the reader with information if the network was learning features that allowed to cluster and therefore classify between the two classes of interest. Lines 94-95 and Fig 4 - Do these visualizations provide us with any information? Does it make sense for these electrodes to be important for these sleep states? Guided backprop helps say which features are important for a particular sample. The authors fail to mention which subject these visualizations belong to. Are they taken from a subject that's in the HDR or LDR group? Or are these averaged across all samples? Are there any differences in the electrode importances between the two classes? Averaging the electrode importances for the two classes separately and showing them both for all sleep states would've been a more informative figure with respect to dream recall classification.""",2,0 neuroai19_39_1,"""The authors use sparsification to study continual learning. They claim this is superior to previous approaches that expand networks for subsequent tasks or penalize changes in previous weights. That being said, I am not convinced that this approach is really that different from previous approaches that expand network size with new tasks, since the authors are essentially forcing each task to use largely nonoverlapping subsets of the network The authors compare their results on permuted MNIST and split CIFAR. For the latter, the results are compared only to Zenke et al. 2017. It would have been nice to see a comparison to a network with non-fixed architecture but comparable network size after training on all tasks. The paper is well-written. However, additional discussion about the central assumption of the model, that the ""interference"" weights can be set to zero and ignored, would be helpful. The authors attempt to connect their results to neuroscience by noting the plausibility of their approach. However, the results seem to suggest a sparsening of representations from lower to higher layers in the network, which at least for the visual system seems it may be counter to the experimental findings. Also, there is no discussion of the biological process corresponding to the determination of which weights are ""interference"" weights during the learning of a new task.""",3,1 neuroai19_39_2,"""This is a clever idea, implemented well, and showing good progress on an extremely difficult and important problem. The methodology and analysis are as rigorous as field standards. I might have liked to see plots of the validation performance as a function of the three hyper parameters optimised using grid search, to get a feeling for the robustness of the methods (the plot in Fig 3a implies that the results are quite sensitive to these choices). This is an excellently written paper, carefully covering the background literature, well-paced intuitive explanation of the key idea, and straightforward presentation of the results. The innovations are biologically inspired, but it is clearly an ML paper. It is not obvious to me that the findings have any direct implications for our understanding of the brain. It would be great to back up these empirical findings with some mathematical analysis, even on a toy version of the model. The idea makes intuitive sense, but fully exploiting it and indeed understanding its limitations is going to be hard to do with experiments alone. This may for example help with principled selection of the hyper parameters depending on the data structure.""",4,1 neuroai19_39_3,"""This paper attempts to address an important problem. The method proposed are intuitive and reasonable, which could potentially inspire future work. - The authors tested the method in a two sets of experiments. The task is created based on permutation/split of images, thus the tasks are quite similar. Did the authors tested quite different tasks, for example, learning to classify MNIST then CIFAR and so on? - In terms of parameter m, the authors used 0.05%-2%. Would these numbers generalize to new tasks? I found the writing is generally clear. It is not difficult to follow the paper. The paper would be stronger if the authors could refer to some neuroscience literature on pruning of synapse in the brain. The paper proposes a new method to perform lifelong learning. The basic idea is to prune the neurons of zero or low activity and use these neurons for later tasks. The pruning procedure leads to a set of weights which could be changed freely without causing any change to the output of the network. I have not been following all the previous work on continual learning. But I really like the idea and the approach the authors are taking. The results shown in Fig. 3 are promising. Overall, I think this is a strong submission. """,3,1 neuroai19_40_1,"""The ideas presented here are novel as they show how neuroscience principles such as modularity and population coding can be adapted to achieve successful learning for RL tasks. The technical information provided is sufficient for following the paper. However I wish the materials under 2.4. were explained in further depth in terms of equations defined in 2.1 - 2.3. The paper is easy to follow and overall well written. The paper is well positioned in the intersection of AI and neuroscience, and shows how knowledge from neuroscience continues to inspire new novel frameworks for AI. The authors develop multi-agent learning framework with spiking neurons to solve reinforcement learning tasks. Authors adapt generalized linear model (GLM) as spiking agent and use local learning rules modulated by global reward prediction error to train the network. In addition, authors complement the framework with brain inspired modular architecture and population coding to reduce the variance in learning updates. Authors applied the framework to two RL tasks to demonstrate its potential as viable optimization technique. The value in this work is that authors adapt brain inspired principles such as spiking neuron, modularity and population coding into their framework and demonstrate each principle contributes to learning in RL tasks. The successful adaptation of neuroscience principles in this work is a good example of how neuroscience can promote a novel framework for AI. """,4,1 neuroai19_40_2,"""Making the best use possible of global error signals may be very important for solving challenging machine learning tasks with a neurally plausible algorithm. The experiments are clear and demonstrate the efficacy of the two proposed variance reduction techniques. However, I see two technical issues that may limit the scope of this work: 1) It seems the number of timesteps simulated is very low (5 for gridworld, if I'm interpreting ""spike train length of 5"" correctly), which makes it unclear how the networks described relate to event-driven spiking networks operating in continuous time, since the representation sparsity is so different and the information throughput of the cells in the paper seems limited. For example, using ensembles may have less of an effect on networks with more information throughput per cell. It would be good to compare an ensemble of 10 networks to a single network with 10 times as many cells, and a single network running with 10 times the temporal granularity. 2) The networks tested were very small, and Fig. 3b shows cells struggling to learn from 200 inputs. This makes me unsure how well the proposed approach can scale. Also, it seems the networks in Fig. 3a may not have finished training. The writing is generally quite good, though there are a few parts that are either a bit imprecise or hard to parse. e.g. - I don't understand the sentence that begins at the end of page 2. - The neuroscience of ""modular structures"" invoked in section 2.4 is vague. - The jump from ""population coding"" to ""ensemble model"" seems a bit unmotivated. - I don't understand the first sentence of the cartpole task description. - The term ""computational power"", in reference to [Maass, 1997] is vague. Learning to accomplish standard tasks in the ML::RL community using global error signals, with neurally inspired variance reduction techniques seems like a good fit for this workshop.""",3,1 neuroai19_40_3,"""The idea of considering individual cells as ""firing policies"" might advance learning in spike-based systems. The idea of learning spike train generation through RL is interesting, however, it is questionable if this method will scale well to larger systems. In particular, as its number of communication partners increases, a neuron has to deal with a more non-stationary environment, making learning in systems of non-trivial size hard. Thus, it is unlikely that a framework of this type would work in systems with more realistic sizes (i.e. number of neurons on the order of neurons in biological brains.) It would be nice if some results for larger systems (e.g. tens of neurons) could be shown... - The presented work is at the intersection of Neuro + AI""",3,1 neuroai19_41_1,"""This work has important implications for the psychiatric research community, and may be for thinking about reward normalization / reshaping in deep / tabular RL. However, the results are not yet totally convincing as its relevant to only one simple task in a tabular setting. The method proposed is quite simple. There is a need for more experimentation with a wider array of tasks in order to be able to facilitate the authors claims, since the authors do not fully elucidate the connection to RL in more relevant tasks. Itd be interesting to explore this idea in deep RL with commonly used tasks for the authors to be able to make the claim that they 'outperform state of the art algorithms'. Overall, the authors seem overly enthusiastic about the prospects of some of the results. The source of the performance gain appears to be possibly from reward normalization / reshaping. While there is a connection, its not clear if psychiatric disorders are / should be the source of inspiration for doing better reward normalization / reshaping. While the motivation of the paper is clear, the method is not explicitly described and requires some digging to understand. Notations are not entirely clear as some deviate from standard RL notations. For instance, important algorithmic details are neglected such as how value tables are updated? Is the task tabular or approximated using deep methods? Was there an eligibility trace? Also, task could be better explained as it is non-obvious to most readers. What are the justifications for comparing with this task, which seems inherently biased to benefit algorithms that learn multi-modal distributions rather than point estimates? There is also confusion about how the numbers were generated in the end. Also, there is not enough explanations to help the reader understand the figures, especially given that the task is highly specialized and described quickly in words without explanations for the way its decided. The authors propose using Q-learning as a framework for modeling individuals with different known reward preferences in psychiatric disorders. The intersection is there, although the authors are drawing connections in specific areas where its lacking. For instance, RL can be characterized generally by methods of doing value updates or propagating information about rewards through history. However, the authors are using the framework to examine a very simple, two-choice task. The authors propose modeling psychiatric disorders with reinforcement learning, through tracking both a positive as well as a negative q-function. There are presentation issues, and more analyses, tests are needed to convince the reader of the authors claims that that psychiatric disorders can serve a source of inspiration for designing better RL algorithms. """,3,1 neuroai19_41_2,"""This work is important. Task domain should be expanded. The work is convincing to the extend to which one can judge such brief articles. The article has been very well written. The article uses refined AI metrics to address neurological disorders. The overall approach could be very rewarding for both fields. """,4,1 neuroai19_41_3,"""This intriguing study proposes to modify the classical Q-learning paradigm by splitting the reward into two streams with different parameters, one for positive rewards and one for negative rewards. This model allows for more flexibility in modelling human behaviors in normal and pathological states. Although innovative and promising, the work is quite preliminary and would benefit from comparison and validation with real human behavior. No comparison with human data. The figures are hard to parse because of the very short captions. One needs to go see Appendix C to understand what the model used (SQL) consists in. The work has promising implications for computational psychiatry, but probably not for RL at this point. It would be good to compare and fit the proposed models to real human/primate behavior in normal and pathological conditions and make testable predictions. Also, it would be very interesting to use these models to predict situations that might trigger maladaptive behaviors, by finding scenarios in which the pathological behavior becomes optimal. """,3,1 neuroai19_42_1,"""This paper provides a straightforward application of SoundNet to predict fMRI responses to an audio stream. The question remains to what extent the presented results provide new results beyond those of pseudo-url and pseudo-url. Standard statistics are performed. Thresholding for threshold map seems quite arbitrarily chosen. It remains unclear which stimulus features are driving the response predictions. Some typos. E.g. improvising => improving; this results; ... Use of DNNs to explain observed brain responses falls right at the intersection. + Use of audio-based DNNs can provide interesting insights into which stimulus properties drive neural responses + DLPFC results could potentially point to novel properties that are predictive of responses - Novelty compared to existing work unclear - Insights about what stimulus properties are driving the predicted responses will strengthen the paper""",2,1 neuroai19_42_2,"""Not sure how much we can gain from this sort of study. The authors show that deeper layers of a pre-trained auditory network can be used to predict fMRI responses, albeit not very well. But what can we conclude from this? Would the same be true of a different auditory network with very different properties? How much does it depend on the specific structure of that network? Could it just be the case that higher level features appear deeper in the network and correspond to areas recorded by fMRI? All seemed reasonable, but I would have liked to have seen controls against different architectures. No problem understanding this work. Definitely relevant, a similar approach to what has been tried with much success in vision. I've given this the ""good"" evaluation because even though I personally am not convinced by this type of approach, it seems to be reasonably well done and I know that a number of people do find it useful and convincing.""",3,1 neuroai19_42_3,"""Understanding cortical acoustic processing is an important neuroscience goal. This paper doesn't however motivate the specific model being chosen, or what different layers mean. It is stated more like a prediction task instead of an understanding the brain task. From what I understood, the R2 is being computed as the maximum over an ROI, at least in one part of the paper if not all. the maximum is a very noisy statistic and is not very reliable as a metric for model fitting and improvement. From the paper, it seems that the authors picked the best fold to interpret (after looking at the results). This is effectively double dipping and negatively affects reproducibility. R2 values in fMRI single trials are typically low and that is ok. The solution is not to use the maximum (if I understood correctly the motivation). The authors should be reporting single voxel metrics (over the brain) or should be computing some mean statistics. The paper is understandable but some methods crucial methods details are left out (3.1 and 3.2) Using an AI algorithm as a model of what the brain is doing. Although here how the model (SoundNet) could be an analogy of the brain (e.g. what could the different layers correspond to) could be more elaborated on.""",2,1 neuroai19_43_1,"""I believe the concept of using predictive coding and unlabeled video data to train convnets is a great idea. However, the contribution of the authors does not appear to extend beyond combining existing data sets with existing network architectures. The work is lacking a discussion of the most recent work in the similarity of visual processing in convnets to brain data, which incorporate recurrence into convnets (Nayebi et al. 2018, Kubilius et al. 2018 and 2019), thereby potentially allowing for similar behavior as a PredNet. How would you expect those networks to perform when trained on unlabeled video data? It would have been useful to put these in context of the results of the algonauts contest, which pitched supervised methods such as Alexnet against user-submitted content. Does PredNet outperform other user-submitted models? For this result to be convincing, I would like to see some reasons why the authors think PredNet is outperforming previous models. For example, is there something different about the feature maps that support this? What precisely about predictive coding makes the similarity to brain data expected? Results were presented quite clearly, although datasets and methods rely entirely on previously published work, such that digging into previous work on PredNet and the Algonauts project was necessary for a full understanding. The question of how the visual world is represented in the brain is an essential question in neuroscience as well as for building successful machine learning techniques for artificial vision. It does not seem like predictive coding is the main thing going on in V1 (Stringer et al., Science 2019), so Id be curious how the authors think that should be taken into account in the future. Typo line 24 Moreover, we show that as (we) train the model Typo line 87 Second, the model does not rely on labeled data and learn(s)""",3,1 neuroai19_43_2,"""The authors seek to develop unsupervised training models that exhibit image responses correlated to observed fMRI and MEG activity. This work fits into a tradition of similar inquiries and shows that a predictive, unsupervised network can exhibit higher correlation to neural activity than a supervised convolutional network. The use of multiple datasets contributes to the rigor of the paper. While the similarity measures reported in the paper are based in literature, as reported they are quite opaque, and more detailed analysis would be needed to be persuasive. More detailed reporting of similarity (e.g. example trials or time-courses, alternate criteria) could be helpful. The paper poses an interesting question and develops it clearly. The empirical results reported are not straightforward to interpret. The paper asks whether artificial neural networks can be useful models of brain activity, and therefore sits comfortably at the interesection of neuroscience and artificial intelligence. The authors present evidence that an unsupervised, predictive-coding model of vision is more correlated to neural data in its responses than popular, supervised models. This is further evidence that supervised feedforward models fail to capture something substantial about natural vision, although the particular predictive implementation described here might or might not be descriptive of reality. Without detailed insight into the similarities, it is difficult to evaluate whether the similarities are persuasive. While the authors report similarity between PredNet representations and data as compared to feedforward architectures, theres a question that seems to be unaddressed which may be technically difficult to address fully, but is necessary to understand at least qualitatively: is the performance of the predictive network, at the volume of training data used in this study, comparable to the performance of the feedforward networks? An answer to this question in either direction would not detract from the results presented in the study, but would clarify what existing models have and have not captured about image processing in the brain.""",3,1 neuroai19_43_3,"""This paper shows data that refutes a previous result that representations from unsupervised models are not good at predicting visual areas (mainly IT). Instead, the authors find that a model trained to predict the next frame in a video is more correlated with visual areas. Minor: Which unsupervised models did the 2014 paper use? Should we call PredNet self-supervised? Only using ResNet and AlexNet are not enough as baselines as a lot of recent work has been done in this area. The paper appears sound. RSA with Spearman correlation is widely used in the field but I don't think that it is the best approach to relate different representations as its properties are not theoretically analyzed. There are other theoretically derived similarity measures or other approaches such as encoding models that I believe lead to more interpretable results. It is also not mentioned how the noise ceiling was estimated and accounted for. The authors mention all results are statistically significant. Does that mean that the individual correlations are higher than chance or the bolded correlations are significantly higher than the others? Has multiple comparison correction been made? All details are useful and there is space for them. This paper is well written but misses many details that are important to understand everything that was done (see other comments). This paper is exactly at the intersection of (cognitive) neuroscience and AI.""",3,1 neuroai19_44_1,"""If they exist, finding functionally different groups of units in a DNN and using them to generate hypotheses in the brain is an important goal. The authors apply their technique to the second layer of a network trained to recognize digits. No evidence is provided that this network is functionally similar to the brain or that their technique would generalize to more complex sensory networks. Thus it is difficult to tell from the paper whether their potentially important technique is important. Many decisions were not well motivated, justified or described at least in brief. For example: choice of attribution method, choice of network and layer, number of clusters. The space of parameters and network should have been explored more thoroughly to be convincing. Or at least justification for why their methods did not require more thorough testing. Unclear whether units under study from MNIST were rectified or not. This would have a big difference on attribution depending on how sparse units were. Figure 1a and c are not well scaled and the contrast is very low. A log scale might have helped and larger labels. In line 65 it is not clear what 'the constructed matrix' is (neuron-sample, e, a). Similarly figure legends/labels in Figure 2 C are overlapping and tiny. Not vectorized so pixellated. They do mention that this method of clustering could be applied to neurons. Does not mention how this might work in practice or what would be gained or why it should be preferred over other clustering methods that have been applied to neurons. A potentially interesting and important technique for clustering. More motivation for the decisions in their study of this algorithm. Figures need to be improved in terms of contrast and relative sizes of figure labels. """,1,1 neuroai19_44_2,"""It is potentially useful to show if functional modularity exists in artificial neural networks. The authors claim they find functional modules by applying biclustering to the neuron-sample matrix but current analyses primarily evaluated on MNIST dataset are not clear and limited to tell whether the existence of functional modules. There is no explanation of the selection of tested layer and no comparison between results of other layers. Similarly, the choice of the number of biclusters is also confusing and no explanation is provided. In figure 2c, it looks like multiple biclusters contribute to the same class, then why not change the number of biclusters and evaluate more results. All figures are too small to follow and understand, especially figure 2c which has labels overlapped. The grey tone color of neuron-sample matrix makes figure 1 hard to visualize. Although the authors claim the usage of neuron attribution and biclustering can be applied to real neurons, there is no explanation of how exactly to apply. Other than that, this paper is mainly focusing on artificial neurons. It could be an interesting direction in terms of looking for functional modules in artificial neural networks via clustering methods but more rigorous analyses and clearer explanations are needed.""",2,1 neuroai19_44_3,"""The technique could potentially improve the interpretability of DNN models for feature understanding Some rigor is needed in the experimental evaluation. The DeepBind study is not clearly explained and lacks technical rigor or has not been explained clearly. The claim ""DeepBind model learns something from raw data"" is too vague. Technical details were omitted due to space constraints. The main idea and outline of implementation and methodology were explained. Other works considering attribution and importance scores have been cited clearly. A technique is discussed with application towards analyzing DNN models which are prevalent in machine learning/AI. This technique aims to make models and features learned by DNNs more interpretable, and the principle is inspired by biological hypotheses of functional modularity. Several questions are unanswered. The experiments demonstrate preliminary investigations, at best. More rigorous experimentation and crossvalidation are needed. Is the phenomenon observed across different datasets with more diversity? Is the possibility of memorization/overfitting ruled out? Is that even a concern, or is memorization what functional modularity implicitly refers to? Does the fact that the network is convolutional help in identifying the biclusters because now the neurons are more structured than fully-connected networks? The choice of attribution scores etc. is not clearly justified.""",3,1 neuroai19_45_1,"""The author(s) develop strategies to use partial measurements to constrain and fit whole-brain + body simulations of a c. elegan. The resulting models would be useful tools of significant interest for neuroscience. The manuscript could be made much more convincing by incorporating more efforts to quantify degree of model fits beyond qualitative assessment presented. I find the figure and associated legend somewhat unclear. The legend itself does not describe what is presented in the figures--instead that is done within the text itself. The manuscript also discusses panels within the figure out of order a bit. As someone outside of the c. elegans community, I also found some of the presentation of details lacking (e.g. how calcium data specifically are translated into constraints on the model). Some brief description of details could be useful to widen the relevant audience. Similarly, I know there are many other existing models/simulators of C. Elegan nervous systems. Some introduction or discussion to put this work in context with that existing literature would be beneficial. The work represents a nice example of improved modeling of a nervous system based on measured data. This is broadly of interest for neuroscience research, however the relevance of the work for machine learning/AI is not well-articulated in the manuscript. Interesting work using modeling and measured data to build tools for better interrogating biological neural networks in simulation. The model fits could be better quantified, and the relationship between this work and other existing c. elegans simulations that exist would be beneficial. """,4,1 neuroai19_45_2,"""The authors describe a new method for variational inference with pseudo-marginal likelihoods that can deal with a large number of variables and does not require a differentiable simulator. This could also have a lot of application to other biologically realistic life science simulations. They apply it to inferring the full state of the nervous system of C. Elegans from partial observation. The approach describes a variational method to infer the latent state of the nervous system of C. Elegans from observation of a subset of the cells via calcium imaging. They show that they are capable of reconstructing the latent state, and are also capable of repoducing Worm behaviour. Visually these are convincing, however, they do not show any quantification of the quality of reconstructions. The problems and solutions are clearly described. This paper develops a new optimization technique for inferring nervous system state, so is clearly at the AI-Neuro intersection. Strengths: A thorough and interesting new approach to the problem. Areas for improvement: Quantification of reconstruction quality. I would also like to know what the alternatives were if you didn't develop your PMVO method, and why they would or would not have worked as well.""",4,1 neuroai19_45_3,"""I am not expert in C-elegans models so it is hard to evaluate the contributions of this paper and their novelty. I however have concerns about the clarity of the exposition of the findings. The works seems rigorous but some clarity issues prevent good evaluation. Abstract unclear about the achievements: to allow imputation of latent membrane potentials from partial calcium fluorescence imaging observations. => What does it mean? Using state of the art Bayesian machine learning methods => Which state of the art methods? simulations investigating intelligent lifeforms in silico => What properties of lifeforms are being investigated precisely? Intro be able to simulate this interaction ad infinitum => what kind of infinity? Sec 3: ""Critically, neurons not directly connected to observed neurons (for instance VD6) are correctly reconstructed, indicating that the regularizing capacity of the model is sufficient to constrain these variables."" => Was this observed neuron not used as one of the calcium traces for inferring the model? What are the latent states? Unclear what part of the data is predicted and what part is observed. Fig 1 caption:"" Successful recovery of latent states conditioned on just 49 calcium traces, where the true trajectory is shown in black, as introduced in Section 4 "" => What are the latent states? Bayesian inference method: ""The method we employ for performing parameter estimation is a novel combination of variational optimization (VO) [10] and SMC evidence approximation. To our knowledge, this is the first time that pseudo-marginal methods have been paired with variational optimization methods. "" => Difficult to evaluate the importance of this new combination for the field (Disclaimer: I am not an expert). Interesting for neuroscience, less clear for AI. Promising study that would benefit from clarifications.""",3,1 neuroai19_46_1,"""The manuscript addresses whether artificial neural networks that better incorporate the discrete nature of biological neuron spiking dynamics would have different ""scaling"" in the number of model parameters versus complexity of problem that can be solved. This both addresses key challenges of AI network scaling and touches on what aspects of biological neural processing are critical to the brain's computational efficiency. The results could be improved by presenting performance comparison metrics evaluated across multiple train-test sets, rather than for a single set. Some sense of distribution of performance and statistical significance of differences between models would greatly improve the findings. The rigor (and clarity for broader audiences) would be improved by incorporating more detail on algorithms used (brief summaries and key equations) rather than purely pointing to citations. See comments in Rigor about providing more detail on algorithms. While the work itself primarily focuses on AI-motivated questions, the work addresses questions relevant to the interface between both fields. A solid paper that tests the hypothesis that the discrete nature of neuron computations may provide computational advantages by allowing artificial networks to solve complex problems with fewer parameters. """,4,1 neuroai19_46_2,"""Taken at face value this is a very impressive set of results. As the authors make the case in the introduction, readout and training costs for deep networks can be expensive in terms of hardware, energy and time. This much reduced model builds on previous work quite substantially, perhaps not in concept but definitely in performance as evidenced by the large decrease in error rates between different MST implementations. Very promising. The work appears principled, and since the method builds on a well-known previous study it can be assumed that the method is reasonably robust. However (and this is an obvious criticism) the method was only applied to one particular and not-too-common task. How would it peform on other tasks? Are there task domains where MST would be expected to fail? And are there tasks where MST could excel even further? None of this is explored or even speculated on in the paper, leaving me unsure how robust the results are. The paper is extremely well-written, insight is offered in almost every sentence, caveats and possible criticisms are frequently pre-empted. This is a true biologically-inspired machine learning method. It is based on fundamental biological neuron properties (spiking dynamics, synaptic inputs, temporal data), but trained on a standard supervised machine learning task using gradient descent methods. As mentioned above, the conceptual advance was a bit incremental over previous work but the results are very impressive and exciting. However I would like to see more exploration of other tasks to see where this method would work and where it wouldn't.""",4,1 neuroai19_47_1,"""The proposed model is essentially a constrained/specific parameterisation within the broader class of 'context dependent' models. The heavy lifting is seemingly done by well known architectures: default RNN & a feed-forward NN. While it does not seemingly add anything conceptual, the exact implementation is arguably new. The model description is nice and clear. I think a more persuasive bench marking could be done. Perhaps compare to reference models [11] or [10] rather than a 'vanilla' RNN, as this amounts to not using any prior information about the task (which, by construction, we 'know' is useful). Also perhaps report results from one of the 2 (mentioned) more complex benchmarks. Paper is clear and quite readable. The paper takes a crudely 'neuroscience inspired' concept (though, admittedly it could simply be 'task structure' inspired) and builds a simple model from it, which it benchmarks on a appropriately designed simplest-working-example. So it fits well with the workshop theme. I'd say a fairly 'standard' work for the setting. Only real point for improvement is more earnest bench marking/model comparison. Authors could also add some context by considering related works in the computational neuroscience literature, e.g. Stroud et al. Nature Neurosciencevolume 21, pages 17741783 (2018) and pseudo-url (though the latter is very recent).""",3,1 neuroai19_47_2,"""Its an open question in neuroscience what the purpose of neuromodulation is in learning and behaviour, given that neuroscientists know a lot about their effects on intrinsic properties of neurons. It's also difficult to develop RL agents that generalize across tasks well. This paper addresses these questions along a similar vein to recent approaches (Miconi et al., 2018, 2019). The authors implement a reasonable interpretation of the effects of contextual neuromodulation on the intrinsic properties of neurons via a recurrent neural network influencing the gain of learned scale and bias of node activation functions. The benchmark chosen is simple, and the treatment of the problem is rigorously addressed running over many seeds. The paper is well-written, with clear figures and descriptions of the model, task, and results. The paper introduces a neuroscience-inspired solution to training RL agents to a behaviourally relevant problem, therefore is well-positioned at the intersection of neuroscience and AI. Strengths: The paper is clearly written, well justified, and model is rigorously tested. Areas for improvement: The results are modest and I would be keen to see how the approach scales to more difficult benchmarks. The method they choose clearly reduces the variance in rewards gained, which is interesting in of itself. I would like to see whether this holds up. Additionally, I would like to see how this method performs when context must be inferred by the agent.""",3,1 neuroai19_47_3,"""The paper presents a novel structure for neural networks that can generalize to new tasks. The structure appears new where a first DNN computes some terms, z, based on a context. The term z is then applied to the weights in the layers in a second network. This structure potentially allows learning across new tasks. The methods are tested on a standard Meta-ML benchmark and appear to outperform state-of-the-art methods. The paper takes on very challenging state-of-the-art problems with a sophisticated network. The tests against the bench marks is rigorously and thoroughly performed. The paper was mostly well explained. I think somewhere early a general statement of the problem would have helped. For example, in Section 2, I think it could have been made more clear what is ""context"" and what is the training data and what is desired goal. How do we measure performance generalization. Also in the training section, some of the details were difficult to follow. But, that could be a result of the space. The problem of how algorithms can learn to generalize well across multiple tasks and use context is clearly central to both ML and neuroscience. The paper makes a case that the algorithms are ""inspired"" by biological systems. But, if the goal of the paper is to understand how true biological systems work, I think there needs to be more detail on how this architecture would map biologically. But, that obviously is a very hard problem and the results here should still be extremely useful. An interesting and novel structure for learning across multiple tasks. The results show improvement on state-of-the-art challenging benchmarks. Detailed strengths and weakness are above.""",4,1 neuroai19_48_1,"""The biological plausibility of deep learning models is an important topic of research both in neuroscience and AI. Recent studies have emphasized the importance of model architecture for having a more biologically realistic representations across different layers of deep nets (e.g. recent studies from DiCarlo's lab). In this paper, it's argued that model architecture might also matter for the success of biologically plausible versions of back-propagation such as feedback alignment (FA). The authors have compared the accuracy of the trained models on a test set for different hyperparameter values. The reported results support their main claim that after including skip connections in the model architecture, FA and DTP (two biologically plausible versions of BP) are as successful in training the model as the ordinary BP. It has been known that adding skip connections partially solves the problem of vanishing gradients with BP. It was conjectured that algorithms such as FA and DTP are not as successful in managing vanishing gradients (especially in very deep architectures) due to their less rigorous error back-propagation. This study provides yet another evidence for the importance of skip connections in dealing with vanishing gradients; a feature that also renders algorithms such as FA and DTP more successful in practice. The results presented in Figure 2 could benefit from more clarity. The hyper-parameters that were included in the test, and the way they were changed throughout different tests are not very clear. The motivation behind the study is clear. The results are presented in an understandable format. The only part that would benefit from more clarity is Figure 2 and the results related to this figure. The horizontal axis in Figure 2 is labeled with ""all combination of hyper-paramteres"", which is not quite clear what it's supposed to mean. From the models' performance, it seems that by going from 0 to 60 along the horizontal axis, the complexity of the models is increasing, though it's not clear at all what these numbers correspond to. This paper is related to important open questions in both neuroscience and AI. Skip connection, as they were proposed in the deep learning literature, could solve important learning problems such as vanishing gradients. The functional importance of skip connections has also gained some attention recently in neuroscience. This paper lies at the intersection of the both fields: the computational efficiency that skip connections offer to deep learning turned out to be important for biologically plausible BP as well. The question raised in this paper is very novel and opens up new directions for future research. More clarity in the testing process can improve the conclusions: the choice of models, hyper-parameters, etc. Also, adding the skip connections can potentially reduce the rate of change in the weights of hidden units that are not monosynaptically connected to the output layer. In a dense net, improvements in performance can be achieved by mainly tuning the weights that are one synapse away from the output layer, circumventing the deep credit assignment problem (the source of vanishing gradient problem). Showing the dynamics of weight changes across layers throughput learning can help understanding the way skip connections facilitate biological backprop. """,3,1 neuroai19_48_2,"""There has been a growing disagreement about whether the backpropagation (BP) can explain learning in the brain. Many biologically plausible algorithms have been proposed as alternatives for BP. Although these algorithms move toward biological plausibility, biologically inspired model architectures also need to be taken into account. This work looks at a biologically plausible model architecture in neural networks and argues that skip connections can improve the performance of such algorithms. Alternative BP algorithms such as FA and TP failed to get the same success of BP in deep networks. The authors claim that by using skip connections in the architecture they get the same results as with BP. Their experimental results support this claim. However, results for sensitivity of test accuracy for multiple hyper-parameters needs more clarification. In figure 2, it is not clear what numbers for all combinations of hyper-parameters exactly correspond to and how hyper-parameters like depth, learning rate, and the number of epochs were changed in different models tested. This paper is motivated by the questions that help to better understand the brain and look at current successful models for deep learning from a biological perspective. The idea of using skip connections with biologically plausible algorithms lies at the intersection of neuroscience and AI. It would be great if authors also use ResNet architecture and compare their results with that. Also, providing results for larger datasets such as CIFAR-10 or ImageNet with deeper networks would be useful. """,3,1 neuroai19_48_3,"""This work suggests that backpropagation may benefit from more biologically inspired architectures. Recent developments in AI support this line of argument, and the authors further extend this by incorporating biological structure into these frameworks. Work such as from Svoboda's lab highlights that sparse long-range neural connections can have important functionality in the behaving brain, so that this improves performance in artificial networks is highly interesting. The authors present data that skip connections can achieve the same performance as backpropagation in a neural network. This data is convincing, but may benefit from an increased parameter space and more detailed about the generation of skip connections. Overall the work is well-written and easy to follow. The figures would benefit from some additional explanations of what the parameter spaces mean (as can be interpreted by someone who is less of an expert in their field), and also how the biological framework for skip connections was derived. This work is very well situated between AI and neuroscience - as we are only beginning to fully understand the connectomic architecture of cortical layers in neuroscience, this may be an incredibly rich landscape in the future as we generate connectomic maps. The application of these connectivity structures to AI is a highly intersectional area, and future efforts in this area can inform both AI and neuroscience. This is a very interesting set of results and are of interest broadly. Many of the ideas presented here could be followed up in many directions, and this architecture is interesting for the computation of biological connections. Some explanation of the accuracy variations for depths 3,4, and 5 would be useful.""",4,1 neuroai19_49_1,"""Emotion detection from neural data could have interesting applications. It is very hard to infer from the paper if the analyses have been performed in a rigorous manner. There's no information on exact preprocessing nor a motivation for the use of this collection of ML algorithms. The paper unfortunately remains quite underspecified and has several typos. The reasoning about brain function remains vague. Standard application of a suite of ML algorithms to a neural dataset While the collected dataset could yield interesting observations about neural signatures of emotions I remain unconvinced given the present analyses and interpretation. A clearer description of the followed steps and an interpretation grounded in the neuroscientific literature will benefit the paper.""",1,0 neuroai19_49_2,"""Emotion recognition is a topic of wide interest. It is very difficult to judge the technical rigor of this paper as the methods and results are barely explained. It is not even clear what the classification task even is exactly. The motivation and intro should be much smaller and much more space should be give to explaining the experiment, the data, the features, the classification task, plots of the results, significance tests, and clear interpretation. There are very few details given about the methods and results. This paper uses machine learning algorithms to classify emotion from EEG.""",2,0 neuroai19_50_1,"""Interesting and important initial characterization and comparison of neural response dynamics between an RNN and neural data. Was not clear whether similarities were surprising or inevitable: e.g. would an untrained net have diversity of autocorrelation decay. No novel predictions or insights into neural data were provided or at least made clear. Results were convincing and carefully performed. The task DMS task seems simple enough (multiplication). It would have been nice to motivate training by gradient descent vs what could presumably be designed by hand. Perhaps it is not so simple? Quantification of variability across networks for different training regimes and architecture would have been nice and provided some indication of how general the results are. Very clear exposition, excellent figures. In areas was brief on motivation and model set up but two pages isnt a lot of space and they cite relevant literature. RNN Model vs experimental model (line 73)? Experimental model is the neural data? Calling it a model is confusing if it is neural data. Used RNN model which is also used in AI but authors never explicitly make connection to AI applications or otherwise. It would have been nice if they gave some indication or speculation of what in the networks changes to achieve different decays in autocorrelation which could then be matched to the natural time scale of AI tasks if prior knowledge was available. The clarity and simplicity of the measurements made in the study are very nice. It would have been nice if the study had gone a step past characterization and comparison to novel prediction and insight. Whats the function of short time scale units? What in the weights of networks creates different time scales and diversity of time scales? Over all the work is preliminary but of high quality and interest. Good work!""",4,1 neuroai19_50_2,"""In this study, the author(s) trained spiking RNNs with BPTT in delayed/instant tasks, then compared the autocorrelation of the trained units with data from macaque. Though the presented work have some technical and biological issues, I think this work is potentially interesting. The author(s) used the word intrinsic timescale and autocorrelation interchangeably, but they are different. In particular, even if the intrinsic timescales of the neurons are the same, depending on the connectivity structure, neurons develop different autocorrelation (see eg. R Chaudhuri et al., Neuron, 2015). Moreover, because the author used autocorrelation sigma, instead of the intrinsic synaptic decay constant tau_s, for the analysis, it is impossible to tell if the functional segregation shown in Fig. 3 is originated from optimization of w or tau_s. Thus, I believe further clarification is needed. experimental model is a bit misleading; experimental data is probably better. In this study, the author(s) trained RNN with a ML method (BPTT), then compared that with the experimental data. That, I think, is a nice intersection. As mentioned on the rigor section, the author(s) should disentangle the effect of w and tau_s, for instance, by comparing the performance of RNN with or without optimization of tau_s, or analysing the effect of tau_s more directly. Another potential issue is the biological meaning of the optimization of tau_s. For a given pair of neurotransmitter and receptor, the variability of the synaptic time constant is unlikely to be large. I think it is biologically more plausible to optimize AMPA/NMDA ratio, while fixing tau_AMPA and tau_NMDA at their typical values. """,4,1 neuroai19_50_3,"""The question of how networks maintain memory over long timescales is a longstanding and important one, and to my knowledge this question hasn't been thoroughly explored in spiking, trained recurrent neural networks (RNN). The importance is tempered by the findings only covering what is to be expected, and not pushing beyond this or describing a path to push beyond this. The work would benefit from more detailed discussion of the training algorithm that provides some indication that the results aren't unduly sensitive to these details. In particular, the setting of synaptic decay constants is an important detail in a paper about working memory. A short discussion of other training algorithms (such as surrogate gradient or surrogate loss methods) and why the given one was chosen instead would have been helpful. A comparison with Bellec et al. 2018, which looks at working memory tasks in spiking networks, would also have been appropriate. The statistical tools are fairly well described and appear to be well-suited for illustrating the phenomena of interest. I feel that more tools should have been used to further support or push the results. For instance, while the heatmaps in Figure 3 provide visual evidence for their claims (except see my comments below), the work could have benefitted from a quantification of this evidence. For instance, it is hard to see differences between the cue periods in the bottom two heatmaps, but differences may appear in some numerical measure of the average discriminability over these regions. The technical details are presented clearly on the whole. However, I feel that the work lacked clarity when it came to interpretation of the results. For instance, the claim of ""stronger cue-specific differences across the cue stimulus window"" between fast and slow intrinsic timescale neurons in the RNN model isn't clearly supported by the heatmap in Figure 3 -- the cue-specific differences for the short instrinsic timescale group to me appears to be at least as great as that of the long intrinsic timescale group within the cue stimulus window. I would be curious to know if making the input weaker or only giving it to a random subset of neurons makes this phenomenon more apparent. It seems that one of the main points of the work is that ""longer intrinsic timescales correspond to more stable coding"", but I didn't find that this point was made very convincingly. The work would have benefited from a discussion of the implications of longer intrinsic timescale neurons retaining task-relevant information for longer -- in particular, this finding feels a bit ""trivial"" without the case being made for why this should push understanding in the field. I think the interesting part may be in quantifying just how much of a difference there is between short and long timescale neurons -- for instance, does task-relevant information in both neuron groups fall off in a way that can be well predicted by their intrinsic time constants? How does this relate to their synaptic time constants? Does limiting the synaptic time constants limit the intrinsic time constants, and if so by how much? The same type of comments apply to the second part of the results, which demonstrates that a task that doesn't require working memory results in neurons with shorter intrinsic timescales compared to the working memory task. The authors use an artificial network model to shed light on the biological mechanisms enabling and shaping working memory in the brain. The paper in the process reveals some (expected) results about how spiking RNNs behave on a working memory task. The proof-of-concept work (among others) that this can be done with spiking RNN may inspire more work in this area. The work is a basic proof-of-concept of results that may not do much to advance understanding since they are what one would expect to see (i.e. the antithesis of their thesis seems very unlikely). Looking into the nuances of the explored phenomena may provide new information for the field. The paper should also seek to connect with more of the recent work being done in spiking recurrent neural networks.""",3,1 neuroai19_51_1,"""The submission discusses a few observations about the neocortex and emphasizes the importance of unsupervised learning of representations. However, the connections between these points are unclear, and it is therefore difficult to determine if any novel ideas are proposed. Because the submission does not elaborate on how neocortical principles could assist in improving unsupervised representation learning, the importance of this work seems lacking. The submission does not present any empirical results or theoretical formulations. Technical concepts are not explored in detail. The submission was, at times, difficult to follow. For instance, connections between the neuroscience discussion (e.g. cliques of pyramidal neurons) and better forms of representation learning are unclear. The submission discusses both fields. However, as mentioned, the details in connecting these areas are largely absent. I would challenge the authors to expand the discussion around Martinotti neurons to include a model of how such mechanisms could facilitate unsupervised / representation learning. The figure could be improved by converting the hand-drawn diagrams into more professional looking graphics. The introduction contains several overstated claims. For instance, its difficult to say whether the task of synaptic plasticity in the neocortex is to learn disentangled representations. """,2,1 neuroai19_51_2,"""The importance of understanding unsupervised learning in the brain cannot be understated. If we could emulate the unsupervised learning used by the brain in ANNs it would be a massive leap forward in AI. Thus, the goals of this submission are very important. However, this submission only gestures at potential solutions, so the importance of this specific contribution is more limited. This submission contains much speculation, and some discussion of known biological facts. But, there is no analytical or empirical demonstration that the biological mechanisms described actually would provide the sort of unsupervised learning proposed. Furthermore, the claim that such unsupervised mechanisms would prevent susceptibility to adversarial attacks is unconvincing, and not backed up by any data or math. It's a very well written submission. It is a perfect mix of neuroscience and AI. There are some great ideas in here, and potentially excellent topics for discussion. But, there is very little in terms of actual material contributions. It is essentially an opinion piece.""",3,1 neuroai19_51_3,"""The importance of the topic covered in this paper - namely, the role of unsupervised learning in biology and artificial intelligence - is very high. However, this point has been highlighted frequently in the past, and this paper stops short of offering any concrete or novel contributions. No concrete results or proposals are offered to solve the (important) problem of unsupervised learning in AI and the brain. Experimental findings regarding NMDA-mediated plasticity in cortex are briefly reviewed, but not connected back to this problem, providing little insight into how to solve it. Generally well-written. This topic surely should and will be discussed at this workshop. I strongly believe the topics covered in this paper should be discussed in this workshop. However, I do not believe this paper stands to contribute much to such a discussion, as it provides few novel insights or directions. One somewhat novel idea that this reviewer was able to walk away with was the suggestion that, in order to obtain good models of biological learning, we should focus on solving ecologically relevant statistical problems in AI.""",2,1 neuroai19_52_1,"""This work is important because it uses stochastic gradient descent (SGD) as an optimisation tool, rather than a 'learning model' as is so commonly done these days. The authors then argue that certain quantities in the model are robust to optimisation / optimal under certain constraints by converging to these parameters values for a wide range of initial conditions. Though the space search is still numerical, the search methodology (i.e. optimisation) is adequate for the claims made. An interesting step further would be to look at how 'sharp' posteriors over these parameters are (e.g. as in work like [Lueckmann, J. et al. Amortised inference for mechanistic models of neural dynamics; COSYNE2019]) Nice clear paper. This is really a theoretical neuroscience paper, I think. This work is interesting and has nice conclusions, though its relevance to this exact workshop maybe a little off. It does not specify any sort of ""idea cross pollination"" from AI<->neuroscience. Still I think many in the attending audience will find it interesting.""",4,1 neuroai19_52_2,"""ANNs based on the olfactory circuit, which uses dimensionality compression followed by expansion, have already provided performance on nearest-neighbor lookup comparable with modern hashing algorithms. Continued research into understanding why the olfactory circuit has evolved its unique architecture could provide key insight into designing a new class of biological-inspired neural networks. The findings in this paper appear strong, as they match experimental findings of PN-KC connectivity. I do have some questions about the model: - Do you see the same connectivity results if you include an APL component in the model? - Do you see the same results if you implement divisive normalization in the model? (Something like a softmax function between the ORN and PN) - Does each odor class have the same number of odors, so it assumes that the fly encounters all odors evenly, or are some odor classes rarer than others? This paper is well-written, I have some small questions: - Is the data used synthetic data, or is it based on a real dataset? (Hallem?) - How was the data partitioned into training/validation/test sets? - For the odor classification, what was the precision and recall across odors? - ""activities of different PNs are uncorrelated"", how was this determined? - At the end of a paragraph, ""The formation of glomeruli is minimally dependent on input noise"", what does this refer to? - Fig. 3: Does this include changes to the number of PNs and KCs? Or just ORs? This paper trains an artificial feed-forward neural network inspired by the architecture of the fly olfaction circuit. The authors perform experiments in silico and examine how the architecture of the model makes the network resistant to fluctuations in the weights. I think this paper is quite good, as provides a hypothesis on why the olfactory circuit has evolved its distinct architecture. I had some comments on what kind of data was used to train the model (real or synthetic), and also on if the same connectivity results persist if the model includes components such as divisive normalization and the APL.""",4,1 neuroai19_53_1,"""CNNs with dropout are proposed as a null model which seems to explain the response noise in neural data, but CNNs have no eye movements, while the neural data apparently does. deltaF/F - explain. For neuroAI symposium, better to define these terms. Details on convnet too sparse. Maxpooling layers? What was achieved performance on CIFAR-10 test set? I would liked to have seen a log-norm example class in Fig 1. The mixture of Gaussians depicted in Fig 1 (a), is one to consider <0.1 signal, or noise? Perhaps its no response plus noise? Do you have any baseline ""no stimulus"" epochs to quantify the level of baseline noise for each of the units considered? For Allen Institute data, and the head fixed mice, apparently eye movements were not paralyzed. This is an important source of variability which is not accounted for in the CNN model. i.e. for a given presentation of the stimulus, one has no idea if receptive fields are receiving remotely similar illumination. Perhaps a way to proceed to control for this would be to inject the Allen Institute recorded eye movements into the CNN model input stream. See rigor. While the analysis of the noise properties of neural responses in these datasets is important for the field of Neuroscience, descriptions of the datasets used and control for eye movements seems to be lacking. This work appears to offer little new insight for the AI field in its current preliminary form. AI->Neuro: ""We believe that research into the structure and role of biological noise will be useful for developing new methods to train neural networks with better generalization capabilities."" The inspiration appears to have propagated in the opposite direction, i.e. CNNs with dropout are proposed as a null model which seems to explain the observations in the data. ""Future work will study how different forms of subspace-aligned noise may help deep neural networks generalize better from fewer examples."" This is an interesting prospect of the work proposed in the abstract, but essentially left for future investigation. With this in mind, while the analysis of the noise properties of neural reponses in these datasets is important for the field of Neuroscience, descriptions of the datasets used and control for eye movements seems to be lacking. This work appears to offer little new insight for the AI field. """,2,1 neuroai19_53_2,"""Tries to argue based on which distributions fit best the noise measured in cortex that the networks are similar to ANNs with dropout. However, the link is tenuous and not strongly argued. It's a weak link to show that two distributions are similar, especially when only a very few distributions were fit and there are a lot of missing details about what is meant by a ""dropout-like"" distribution (early on it says ""see Methods"" but then isn't defined in the Methods). Quite difficult to follow the chain of reasoning here. In principle showing that dropout was like neuronal noise could be interesting, but it's pretty tenuous here. There are a number of claims that don't appear to be very well supported by what is actually shown.""",1,1 neuroai19_53_3,"""The authors suggest that noise corresponds to a regularization step that the brain does, like dropout. Single neuron noise could result in information from that neuron not propagating further down the network, like in dropout (but not equivalent for various reasons), but that would require more thoughtful comparisons with the neural activity, instead of just a comparison of distribution of activations. Controlling for state variables was a useful step to perform. Otherwise the comparisons are not well-quantified and other explanations of the noise/models are not explored. Figure 3 needs much more description, are the subspaces defined using trial-averaged responses? Why define a noise subspace instead of looking at the direction of the noise on each trial? Comparisons are too preliminary.""",2,1 neuroai19_54_1,"""This paper build a biologically inspired neural network from the moth olfactory system and used that as a feature extraction network to preprocess images before using other machine learning algorithms. This is an interesting idea. The authors identified key component for computation from the moth olfactory system. They showed using this network to preprocess image pixel data and generate features as input for other machine learning algorithm can increase performance. However, the preprocessing step is like adding more layers to a CNN (more parameters) and this comparison is not convincing enough to show the importance of the biologically inspired network. The paper is clearly written, but missing some critical control to show the advantage of this feature extraction network. Nevertheless, the biologically inspired network is a neat idea and has great potential. Good as preliminary, but need significant improvement.""",3,1 neuroai19_54_2,"""The relevance of this work was not at all clear. Not clear why they did what they did. The manuscript is readable but the logic is very unclear. Not clear at all what is BNN or how it is relevant. Some claim about faster learning is mentioned. Not sure why. """,1,1 neuroai19_54_3,"""The paper presents a biologically-insured model for classification, i.e., Cyborg. The idea of using computational principles in neuroscience to inspire machine learning/AI is an important research direction. I think this paper represents an interesting attempt along this direction. It could help inspiring future endeavors on this topic. The proposed method attach a previously proposed model MothNet, which is inspired based on the physiology of insect olfaction system to a ML classifier. The MothNet acts as a front-end feature generator. The authors compare their method to several baseline methods. They also tried other feature generators rather than MothNet. Overall, the authors found that Cyborg can achieve better performance on down-sampled, vectorized MNIST, Omnigplot. The techniques used are solid in general. I found the paper is well-written and relatively easy to follow., although the paper would improve if the motivations could be better articulated. Although the work heavily relies on MothNet, which has been proposed previously. However, the authors' serious effort to combine the ingredients from neuroscience and AI to come out with better method should be applauded. Overall, I think this is a quite interesting contribution showing some promise of integrating computational principles learned from neuroscience to ML. A few comments/suggestions. First, it would be great to see the method tested in more challenging datasets to see if the results generalized. Second, it would be helpful to gain some insights about why the performance improves. One possible idea- because there are several ingredients in the MothNet, one could keep a subset of these and see how that change the performance. """,4,1 neuroai19_55_1,"""Many of the arguments made for why sparse random features are useful in the brain (namely, section 4) have been previously discussed in the literature (namely, refs 20,44,45). The novel contribution of this paper seems to be the connection to additive kernel methods, but it was not clear what this added to the previous theory on sparse and random networks. It is critical that this be made explicit for this work to comprise a valuable contribution to this workshop. Within the four-page limit, no explicit demonstrations of their arguments from section 4 were made. However, extensive connections to previous literature in the machine learning and theoretical neuroscience literature are made, making their arguments relatively convincing. A critical piece missing is explicit connections to previous theory on this subject. For example, how do their scaling results on the number of neurons (namely, the ""claim"" in section 3) relate to those of reference 45? It seems like there are two separate ideas in this paper: 1) sparse random features are useful in feed-forward networks (section 4) 2) sparse random features implement additive kernels The novelty of this paper presumably lies in the connection between these two ideas. While each of these was clearly explained, their connection was not at all clear to this reviewer. It is possible this is due to limited knowledge of the kernel methods literature. This paper tries to link the theory of sparse random features and additive kernel functions to feed-foward networks in the brain. The introduction section elegantly reviews previous research on these topics, attempting to directly link experimental findings in the brain to the sparse random feature architecture motivated by machine learning theory and adopted in their proposed model. The overall subject of this paper seems important and novel enough to merit discussion at this workshop, but as it currently stands this paper does not clearly establish its contribution to this workshop. That said, I believe it is a simple matter of addressing the critiques made above. It is understandable that much of this could not be addressed within the four-page limit. In particular, it is my view that the connection to the kernel methods literature needs to be stated more clearly so as to be accessible to the more neuroscience-oriented audience of this workshop. The novel contribution of additive kernels to the theory of sparse random networks - a topic studied at length in the theoretical neuroscience literature - needs to be made explicit and clear.""",3,1 neuroai19_55_2,"""I am not completely sure how much value is added by the work presented here, relative to previous kernel work. However, this is still potentially interesting if framed within the context of prior literature. The work appears technically rigorous, albeit highly based on the previous literature. The ideas contained here are contextualized to their biological counterparts very well by the authors. Links between AI (sparse connectivity) and the nature of biological connectivity are drawn throughout. This was well done. I appreciate the specific examples detailed by the authors (mushroom bodies). The limitations of the approach with respect to the context of biological networks, i.e. neuromodulatory cells, etc. were presented, which is a helpful point to raise when comparing any artificial system to a biological one. The biological links presented in this work are interesting. The authors also do well in acknowledging the limitations of this model for biological application (for example that cell types and neuromodulator levels are an important feature of biological networks). The appendix was quite frankly, long - beyond what I assume the limitations of the conference organizers intended. Although this information was likely helpful in explaining the details of the method, I did not fully cover the entirety of the appendix.""",4,1 neuroai19_55_3,"""Many machine learning methods implicitly assume models where all features are allowed to interact with each other. As far as I can tell, the main contribution of this work is an approximation guarantee for kernels induced by sparse connectivity. The submission also discusses further advantages of such kernels in terms of previous literature. Although none of these contributions seem groundbreaking, I believe that the connections drawn are interesting and useful. I believe that this work is technically rigorous. I have not attempted to check the claim given in Section 3 as the proofs are well beyond the 4 page limit. The results in Section 4 seem to draw heavily on previously published work. I'm not totally convinced by the arguments in Section 4.2. If the regressor relies heavily on a handful of features and e is aligned with these features, e might still substantially change f, so I don't see any inherent reason that sparsity should lead to adversarial robustness. Moreover, the treatment here seems to cover sparse attacks (e.g. adversarial examples constructed with respect to L_1 or L_0 norms) but most literature on adversarial attacks treats adversarial examples constructed vs. L_ and L_2 norms, where it is unclear whether there is any advantage of sparsity. I think this paper is about as readable as it can be given the space restriction. I am not extremely familiar with literature on random features approximations or generalized additive models, but I still found the arguments here to be relatively easy to read and grasp intuitively. This article uses methods and results from the kernel approximation and generalized additive modeling literature to show benefits to sparsity. It is generally difficult to dissociate aspects of biological brains that serve important roles in terms of inductive bias or efficiency from aspects that arise from biological constraints. This kind of work plays a critical role here. This submission investigates potential advantages of sparsity, first showing that kernels induced by sparse features can be approximated with a number of random features that grows linearly in the input dimension, then connects sparse kernels to literature on additive modeling and suggests stability and scalability benefits. The implication is that the sparse nature of connectivity in the brain has advantages beyond merely satisfying biological constraints. Although not totally conclusive, I believe that this work has the potential to provoke interesting discussion. Strengths: The work seems to be mathematically rigorous and the conclusions drawn are interesting, if not totally conclusive. The submission is generally quite readable given the space restrictions. Weaknesses: Intuitively, I'm not sure the kernels discussed here will be universal kernels. I'm not totally convinced by the robustness arguments in Section 4.2. Although not strictly necessary, especially for a workshop submission, no experimental demonstration of the suggested advantages of sparse connectivity is provided. The Call for Papers for this conference states that submissions should not be more than 4 pages long including references and appendices. Although the main text of this submission is 4 pages long, it has 2 pages of references and a 9 page appendix. I did not read the appendix.""",4,1 neuroai19_56_1,"""Invariance to translation and other affine transformations is necessary for object recognition. In both vision neuroscience and deep learning the problem of invariance has been discussed extensively, yet there is no consensus on the computational basis of it in the brain and deep nets. This paper suggests that the rectification nonlinearity in deep networks underlies the emergence of invariance in these models. There are many simplifying assumptions which, even if reasonable, are hard to accept based on their presentations in the paper: a very simple one is the assumption of Gaussian distribution for both S and g(S). The authors certainly know that if S has a Gaussian distribution, g(S) doesn't necessarily follow the same distribution. For g(x) = max(0,x), g(S) would follow the same distribution, only for large mean values. In that case, you can simply assume that the nonlinearity has almost no effect on the distribution which is against the important role that the paper is attributing to the nonlinearity. Moreover, it seems that some of the simplifying assumptions has made the problem of invariance too simple; that is, the final conclusion is the effect of simplifying assumptions in the process: as an example, 1,1 = 2,2 = e* 1,2 . Based on this assumption, the second-order statistics of subunits do not change much with translation which simplifies the main problem significantly. An initial intuitive explanation of the idea behind the hypothesis in the paper would extremely help for following the mathematical arguments later on in the paper. It was quite difficult to follow the line of reasoning in the paper. The connection between the subsections was not clear enough; e.g. it was difficult (or maybe impossible) to understand how the materials presented in section 2.1 were used in the arguments in section 2.2. While reading the paper, it felt that there should be a simple intuition behind the hypothesis (the relationship between translation invariance and the variance of units), though without sufficient explanation, it was hard to grasp that. Also, in different parts of the paper, units and subunits' statistics were discussed interchangeably. It was almost impossible to distinguish these two, and it seemed that understanding the differences in the notations was key to comprehending the idea. The question is important to both neuroscience and AI. The hypothesis suggested in the paper could be applicable to both fields. The idea behind the paper is appealing, but more effort needs to be put into clarifying the reasonings. An intuitive explanation of the suggested relationship between variance and translational invariance can help. Also, the statistical deserve a better justification. The figures were helpful, but more needs to be said on their details, e.g. Figures 2B and 3B. """,2,1 neuroai19_56_2,"""Investigating how representational properties (invariance) might relate to generalization properties seems like an appealing research direction. However, I'm unconvinced that this particular approach to understanding this connection, via analyzing what happens when ReLU is applied to Gaussian-distributed data, is particularly informative. Additionally, the writing is unclear in many places and many assumptions are made with little theoretical or empirical justification. It's challenging to verify this work, since many assumptions are introduced and a lot of math seems to have been omitted for brevity, including a derivation of the ""critical insight"" that ""rectification simultaneously decreases response variance and correlation across responses to transformed stimuli."" But even if one were to assume that the conclusions are correct as stated, several assumptions in Section 2.2 are not a priori plausible to me and have neither heuristic nor empirical justification. It's not clear to me why we should assume that = 2, 2 = 1,2 that ""each subunit has the same covariance structure as another"" (or what this means mathematically), or that the eigenvectors are the same. Each of these assumptions seems quite strong. It's intuitively implausible that ReLU activation alone can explain the generalization properties of neural nets without any further assumptions, so these assumptions seem essential to the argument, but the submission provides no theoretical or empirical justification. The experimental results are somewhat more convincing, although the statement in the abstract ""deep nets naturally weight invariant units over sensitive units, and this can be strengthened with training"" seems to be somewhat contradicted by the experimental results, which show that the described effect is much more prominent at initialization than in trained networks. There are many issues with the clarity of this work. 1. Section 2 starts out by talking about ""subunits"" in the previous layer. This is not standard terminology for artifical NNs and it's unclear what it refers to. S_1, S_2 are introduced but then S_1 and S_2 are used in the equation below. I think S_1 is g(S_1) but I'm not sure. 2. In Section 2.1, S_1 and S_2 seem to go from being matrices to vectors and there seems to be an implicit assumption at S_1 and S_2 are identically distributed. 3. Although Figure 2A is clear, the actual relationship between variance and correlation is never described mathematically. Instead, the authors point to de la Rocha (2007). 4. Section 2.2 is hard to understand for many reasons. It's not clear the stated assumption that ""each subunit has the same covariance structure"" means mathematically. The assumption on the eigenvectors is written as an equation involving but the equation above seems to use u. I don't see how to get Figure 3A from the equation given. This work relates previous ideas regarding correlation between spike trains to properties of deep neural networks. Generally, the idea of linking representational properties (typically a stronger focus in neuroscience than ML) to inherent generalization properties of systems (of interest to both ML and neuroscience, but easier to study in ML) is an interesting area at the intersection between AI and neuroscience. This submission proposes suggests that neurons with higher variance tend to be more invariant to positional shifts because of properties of the ReLU activation, and that neural networks may thus naturally weight invariant units over less invariant units. The idea of investigating neural networks through this lens is potentially interesting, and the authors perform experiments on activations extracted from AlexNet to validate their hypothesis. However, much of this submission is difficult to follow, and in its current form, it is neither mathematically precise nor intuitively easy to grasp. Several simplifying assumptions are made in order to derive the conclusions, and it's not clear why these assumptions should be plausible or whether they approximate the empirical behavior of real neural networks.""",2,1 neuroai19_57_1,"""While Machine Learning has produced high-performing architectures (both manually and with architecture search methods), it is still unclear how basic connectivity patterns could emerge in natural and artificial systems. This paper addresses this question by starting from an all-encompassing, iteratively applied weight matrix which is reduced to consecutive operations by training on the task with a sparsity constraint. The visualizations and analyses are convincing. However, the method is not compared to alternative approaches such as RL-based or evolution-based architecture search (the most closely related paper is probably Pham et al. 2018, pseudo-url). It is thus hard to judge whether the approach shown here will find different networks than other methods or find them more efficiently. The paper is very well-written. Some minor things are confusing in the manuscript: 1. page 2, ""outpud"" should be ""output"" 2. page 4, ""embeded"" --> ""embedded"" 3. page 4, ""classificaiton"" --> ""classification"" 4. page 4, ""Anon [?]"" reference missing 5. I find ""Section 2"" clearer than ""2"" (and ""3"") Emergent structures are certainly a highly relevant topic in both Machine Learning and Neuroscience. For this workshop in particular, the paper is motivated by biological structural changes over a network's lifetime, but doesn't compare their method to any biological data. That said, I still think the work bears a very relevant approach and is developed with a neuro/bio perspective. The bio context might also allow this approach to further develop without having to chase SOTA. Questions 1. In Fig. 2, why can the URN not be collapsed to a single layer with I iterations, where the first d_in neurons always deliver input (and not just once; in e.g. vision the brain also receives continuous input), and the last d_out neurons always deliver output (neuro-analogy of responding early/adaptive inference)? This approach might lead to some changes with training (continuous ""image on""; computing loss at all times/only the last step), but seems like a more generic and more brain-like implementation. 2. How are c_W and c_N chosen? The text stated using ""hyperparameters which consistently lead to 100% test accuracy"" -- are c_W=5x10^-7 and c_N=2x10^-5 the only ones for which that worked? It would be cool to see which sparsity constraints lead to promising models and if those tell us anything about the sparsity in the brain. Suggestions 1. I only gave this a 4 and not a 5 because there is no baseline/comparison to other methods. The URN approach should be compared to existing architecture/weight search/evolution approaches to determine if/how it differs and to tell us which models are better than others. 2. The premise of this work for me is a more flexible search space than most architecture search approach which are restricted to the operations that are defined a priori. For instance, current architecture search techniques could not find local convolutions unless it's already part of the search space. It would be great to see if you can make the space even more flexible (cf. Q.1; maybe starting from an all-to-all connectivity matrix that already includes skip/recurrent connections by definition) and scale it up to the ImageNet dataset. The resulting model could then be tested on its performance and match to brain (using publicly available data, e.g. pseudo-url) 3. I realize this is particularly hard in this context, but is there any relevant developmental bio data that you could compare to? Perhaps developmental tracing studies which you compare the URN evolution to? """,4,1 neuroai19_57_2,"""In this work, the author(s) trained dense recurrent neural networks on simple classification tasks, then found that feedforward structures naturally emerge in the networks. I believe this line of work will provides insights on why the brain is filled with feedback and recurrent connections when feedforward DNN is sufficient for solving tasks that the brain faces, though the presented work is still a small number of empirical simulations. On ""this is necessarily true since..."" in p2: Although it is indeed trivially true that the number of layers is upper bounded by the number of iterations, it does not necessarily provide the lower bound. In particular, in the presence of recurrent connections, a shallow network with recursive working memory should be a valid solution too. The manuscript is clearly written, but this was achieved by using a very small font size. The axonal projection patterns in the brain is arguably regulated innately, and mostly fixed after the developmental period. Thus, the presented work is less relevant to the brain, compared to the traditional genetic algorithm-based approach. Still, the geometry-based regularization introduced in this work is a potentially interesting intersection with the neural architecture in the brain. I wonder why the intrinsic geometric structure of the CIFAR-10 images is not enough to induce the local connectivity. Also, it would be interesting to check, when the network is trained from the learned connectivity structure but with randomized weights, how fast the network reaches convergence compared to the original unstructured model. """,3,1 neuroai19_58_1,"""Thinking conceptually about the shortcomings of both current AI approaches, and models of brains is important work. What is often framed as a data analysis problem in neuroscience (""we now have all this great data, what do we do with it?"") is also a more conceptual problem. I agree with the author's view that moving beyond a static encoding/transmit/decode picture is helpful for both neuroscience and models of AI. Prompting discussion in this area is useful. I didn't find many large omissions in important details or misrepresentations of the literature. Two points would be: Can the list of shortcomings in AI be more explicitly linked to the list of properties of SNNs and how these would address them? There is a response that the brain doesnt have this problem... but it doesnt go on to say exactly how the list of SNN properties could help solve it in the AI case. Responding to the comment that ""The true computational power of the brain lies in the synergistic integration of all the principles of neural computation. To the authors knowledge, such an integration has never been attempted at scale."" It seems reasonable to not count the large-scale brain simulation projects (e.g. blue brain) here (as they do not include any behavior!). But I think Eliasmiths work on Spaun is an attempt to do something like what you're describing at a large scale. It doesnt include learning, but is otherwise a holistic attempt relevant for both neuro and AI. This is clearly written. I found no large points to improve. This is relevant to both communities to think about. A useful discussion piece. I think the main value of the piece for this workshop is in using the points made about spiking and dynamic neural nets as sources for models in AI. The piece could be more explicitly structured around this. I.e. start with the list of shortcomings in AI (currently in the last section), and then follow up with how features of biological neural networks and brains could solve these problems. Currently it's structured the other way around.""",4,1 neuroai19_58_2,"""The author(s) raise important known biological phenomena, largely centered around temporal dynamics, that are not well-incorporated into existing AI approaches. They argue that these dynamics will be critical for advancing AIs. This point is an important and valuable one. However the overall importance of the piece is somewhat reduced by the disjointed presentation and lack of explicit links between broad ideas into concrete examples. The paper provides a very wide overview of neural computation, and does not provide a formal or didactic mechanism to link all ideas and concepts together. The list of observed biological phenomena summarized in ""Principles of neural function and plasticity"" and following paragraph are not well-linked to form an argument with concrete evidence to my reading. While I do not necessarily disagree with the broad statements made, I find the style of argumentation lacking in sufficiently rigorous treatment, particularly for the sections on biological computations. Similarly, the sections on AI lack citations to support statements. While I understand they are commonly-observed phenomena, a more grounded treatment would improve the manuscript. Providing concrete examples of problems that would benefit from models incorporating dynamics, for instance, would greatly improve the impact and broaden the audience. As outlined above, I find the style of argumentation somewhat opaque for those outside the immediate area of study. More didactic explanation of how observed phenomena being cited link to the broad views of neural function would go a long way towards strengthening the presentation. A key strength of the manuscript is its treatment of questions relevant to the interface of neuroscience and AI. It is very topical for this workshop. An interesting perspective, but one that would have much broader impact and appeal if the argumentation were tightened and more concrete examples of next-steps in AI were provided. """,3,1 neuroai19_58_3,""" The paper provides a broadly useful synthesis of key differences between ANN and SNN approaches. However, the multiple grandiose statements, and some that are downright misleading left me puzzling what I learned. Its an opinion piece. It offers a call to action to do more comp-neuro, in that it could revolutionise AI. The paper opens ""In recent years we have made significant progress identifying computational principles that underlie neural function. While not yet complete, we have sufficient evidence that a synthesis of these ideas could result in an understanding of how neural computation emerges from a combination of innate dynamics and plasticity"" What follows is a useful survey of a selection of ideas, by far not complete. For example, many of the interactions between myriad excitatory and inhibitory types across brains regions and neuromodulators, of which dopamine is just one of several, is largely unknown. Arguably ACh and noradrenaline are more important for network states and dynamics, and equally important for plasticity as dopamine. The dynamics of neuromodulation is largely unknown. Which leads me to a few concerns. ""It is probable that revolutionary computational systems can be created in this way with only moderate expenditure of resources and effort"" Of course whole fields are working on this problem. Hardly what I'd call moderate effort. Claims of efficiency of more brain-like approaches compared to AI are disingenuous. A major draw-back of spiking models is that they are much more costly than ANNs, because of the small time-steps required. Sure neuromorphic systems are coming, but not definitely not with moderate expenditure of resources and effort"" While it covers important ground, I think the arguments need more refinement and focus before they can inspire productive discussion. Its more a series of statements than a cleverly woven argument. But the individual statements are sometimes seductive. For example ... ""A neuron simply sits and listens. When it hears an incoming pattern of spikes that matches a pattern it knows, it responds with a spike of its own. Thats it! Repeat this process recursively tens to trillions of times, and suddenly you have a brain controlling a body in the world or doing something else equally clever. Our challenge is to understand how this occurs. We require a new class of theories that dispose of the simplistic stimulus-driven encode/ transmit/decode doctrine. "" The devil is in the details, the ""how"" of ""suddenly"". I feel this statement: ""Our challenge is to understand how this occurs. We require a new class of theories that dispose of the simplistic stimulus-driven encode/ transmit/decode doctrine. "" Largely contradicts this one ""It is probable that revolutionary computational systems can be created in this way with only moderate expenditure of resources and effort"" I felt the paper could have done more to link with current state-of-the-art AI approaches. There was an absence of nuance. While it covers important ground, I think the arguments need more refinement and focus before they can inspire productive discussion.""",2,1 neuroai19_59_1,"""It's nice to be able to relate task complexity to a simple property of connectivity matrices, and to use this to analyse tasks and networks including creating connectivity for multi-task networks. Main issues are (1) it's a very special case and the tasks studied are for the moment very simple, but a lot of promise, (2) not much in the way of actual results, more of a method that could be generalised/applied more widely. Seems correct to me, but not enough space to have a lot of detail. Mostly easy to follow, a little heavy in parts (unavoidably perhaps). Good application of techniques from ML to a neuroscience problem. Really interesting approach, main limitations are that it's a fairly special case (which I don't find problematic) and that it's a bit preliminary / proof of principle.""",3,1 neuroai19_59_2,"""Lots of recent work, especially in neuroscience, has investigated the relationship between cell class, the dimension of the neural response, and the complexity of the task at hand. As it is difficult to investigate these relationships casually in a real brain, most work on this front has been rather speculative and observational. In probing these properties in artificial systems, the authors make important advances in our understanding of how cell classes and dimensionality underlie computation From what is presented in the paper, the work seems to be quite rigorous. I appreciate that the authors included the mean field equations for understanding the dynamics of a neural population with a single cell class, and describe roughly how they extend these equations to include multiple classes. The figures presented are consistent with the text, and provide support for their simulations and general scientific argument. This paper was very well-written. It was technical without getting bogged down in detail, and an intuitive description of their simulations and analyses was presented. The authors did a great job of providing motivation for each set of simulations/analyses. Each result was also well-summarized, and I finished reading this paper feeling like I learned something interesting. While this paper exclusively focused on simulation of artificial neural networks, the general idea and results speak directly to experiments and analyses performed within the experimental/systems neuroscience sphere. I believe that both AI and neuroscience communities will benefit from this work, thus meriting outstanding intersection. Overall, I thought this was a really great paper. A few comments for improvement: - There might be a slight discrepancy in how systems neuroscientists talk about 'cell classes' and how this paper does. Generally, cell classes are defined by their functional (or genetic/anatomical) properties, which may be related (or not) to their connectivity to other neurons. Nonetheless, I do think the current study (in investigating how many populations are functionally related to each other) is interesting - I just find the nomenclature, and how it relates to other literature using the same nomenclature, a bit confusing. I think it would be really fascinating to further nail down the link between functionally-defined cell classes (ie cell classes defined the systems neuro way), dynamics, and neural computation. - The case presented - specifically, the mixture of Gaussians for different cell classes - feels a bit specific. It is nice that there is previous work to build on this, and Gaussians are a great case to start with, but the choice isn't totally motivated and doesn't connect fully with the experimental literature. """,4,1 neuroai19_59_3,"""The authors identify a critical issue in the population dynamics view of systems neuroscience: namely, the lack of consideration of cell class. They aim to address this issue by introducing cell classes into rank-constrained recurrent neural networks. The authors make effective use of a mix of analytical and computational techniques. Constrained optimization over low-rank weight matrices recovers the low-dimensional structure expected from low-dimensional tasks. The use of well-known tasks in the experimental literature is compelling. The paper is clearly written. The ultimate interpretation of the results with respect to either neurobiology or neural networks is not entirely clear. (See full comments below.) The authors address ongoing questions in systems neuroscience and issues of interpretability in conventionally trained neural networks. This paper studies the intersection of several interesting problems in systems neuroscience and neural networks, working in the context of ongoing debates in neuroscience and introducing new techniques in neural networks. Both the rank-restriction and the Gaussian reconstruction could be used in more complex tasks. A sentence or two describing how the rank restriction was imposed, and how the Gaussians were reconstructed, would be welcome. While the technical exposition was clear, the interpretation with respect to biology and neural networks could be clarified. Biological cell type diversity spans many more parameters than those described by the covariance approach; while the introduction of covariance classes is itself an interesting step, the authors might include some speculation on how this definition of cell class enriches the neuroscientists understanding of population dynamics. Similarly, from a mathematical perspective, the authors could clarify why higher rank wouldn't achieve the same goal. """,4,1 neuroai19_60_1,"""Active sampling presents a framework for improving the efficiency of artificial neural networks in tasks requiring interaction between an agent and an environment. Approaches like this are important to try to reduce the training time needed and online processing requirements for artificial agents to make decisions in the real world. The authors present their interpretation of Gottlieb's three motives for implementing active sampling. They conclude that the Recurrent Attention Model implements the first of these three (increase expected reward), and propose objective functions that achieve the remaining two: 1) reducing the uncertainty of belief states, and 2) something related to the intrinsic utility or disutility of anticipating a positive or negative outcome. The quote from Gottlieb (2018) outlining these objectives leaves a lot to interpretation, but the authors present a reasonable method for reducing the uncertainty of belief states that has the bonus feature of providing control over the number of glimpses required to make a decision. However, I do not see how the belief in the sparsity of the output is related to the utility of making a prediction. Nevertheless, the authors show that their new objective function improves the convergence of the recurrent attention model Both new terms in isolation improving convergence individually, although the output sparsity objective appears to do most of the heavy lifting. They also show that by using their uncertainty measure to dynamically determine the number of glimpses they increase test accuracy, most of which seem to be accounted for by minimizing the uncertainty in belief rather than output sparsity. The motivation, method, and results of the paper were well written and easy to follow. The only difficulty I had was in reading figure 1, which was excessively small. This paper is very clearly at the intersection of neuroscience and artificial intelligence, using a well-defined theory from neuroscience to improve a popular model in the AI literature. Strengths: The additional objectives inspired by neuroscience make convergence faster on training accuracy and increase test accuracy on MNIST. They also provide a system to dynamically control the number of glimpses required to make a decision. Areas for improvement: I would like to see either a more convincing rationale for your sparsity objective, or an objective that directly addresses the intrinsic utility of an outcome. I appreciate that MNIST may not be the best task for this, and also that utility will be task- and agent-specific.""",3,1 neuroai19_60_2,"""The authors use insights from neuroscience to an important problem in artificial intelligence: the problem of active sampling. The paper is transparent and benchmarks its approach against existing approaches. The paper is conceptually easy to follow, although there are several minor spelling / grammar / typographical issues. The paper directly applies a conceptual approach from neuroscience to improve an existing, widely cited technique in active image sampling. This paper addresses an important problem in artificial intelligence, in the context of existing techniques, and uses neuroscience as inspiration to propose new techniques. Greater detail in the crucial paragraphs developing the concepts and computations of J-uncertainty and J-intrinsic would be helpful. In particular, why does the new cost term protect the RAM approach from performance degradation with higher numbers of glimpses? This problem is raised and appears to be addressed in the experimental results, but it is unclear why the insights from neuroscience help make this possible. In Figure 2, the both new terms, dynamic plot does not extend into the regime where performance degradation is most extreme; while this may be a result of the technique used to evaluate the dynamic case, it somewhat undercuts the claim that they dynamic case is roughly equivalent in performance.""",4,1 neuroai19_60_3,"""Training RAM networks faster and making their execution more flexible is potentially useful, but unfortunately I see the experiments provided as too weak to support this paper's approach. Unless I'm missing something, it looks like the authors fail to replicate the setup they are trying to extend. Fig. 1 shows a baseline training error rate of 20%. The original RAM paper reports a test error rate of 1%, and linear regression yield an error rate of around 9%, so to me this points to a bug. Confusingly, Fig. 2 shows a baseline training error rate of 5%. Since these are both inconsistent and so far from the original performance measurements, it makes interpreting the extensions' performance measurements very difficult. Also, since the authors refer often to the original paper but never mention this very large performance disparity, the omission seems borderline dishonest. At a high level, the writing is clear, but some of the technical parts could use a revision pass. e.g. the use of the word ""bound"" on line 112 is potentially confusion, as is the sentence about merging objectives in line 105. The loss names are also a bit unintuitive. This is relevant to the workshop, but it's more of a psychology-inspired approach to a solving machine vision problems than a bridge between ML and neuroscience, and it would fit in about as well e.g. in the main conference at CVPR as it would here.""",2,1 neuroai19_61_1,"""The authors address an important problem of developing a general, species-independent approach to quantifying animal vocalizations. Their approach is to use generative dimensionality reduction techniques to learn low-dimensional representations of vocalizations and use this to systematically interpolate between vocalization patterns to map out their perceptual organization in brains. This is a compelling idea, but the presented results shed little light on it. While the authors make some attempt to survey properties of a few dimensionality reduction techniques in Fig 1, this is not very clear, and the authors dont make a systematic attempt to compare dimensionality reduction techniques in the context of their novel approach, nor to identify key parameters or constraints for successful operation. The figures were confusing (e.g. it is not clear exactly what is depicted in Fig 1, nor what its main point is) and it was rather hard to follow the key results. It also, for example, wasnt clear whether the context was a binary or real-valued signal. There was little intersection between neuro and AI, besides using simply using machine learning algorithms to classify/generate behavioral signals. While the general idea of using generative latent variable methods to systematically explore behavioral space is compelling, the authors provide little evidence for most of their key claims. For example, in the introduction they say they will show that the method works across species and conditions, but then give only a birdsong example. Similarly, they claim single neuron responses vary continuously with interpolation point but show no data to support this. Overall, while the overarching idea is interesting, results feel very preliminary and weakly presented. """,2,0 neuroai19_61_2,"""The authors first propose to utilize generative modelling and dimension reduction technique to get a general low-dimensional representation of animal vocal spaces and then by sampling from the latent space, they systematically explore neural and perceptual representations of biologically relevant acoustic spaces with complex features. The direction is interesting but current results are primitive and not enough. Although the authors claim they implement and explore a number of models to produce a series of latent representations, only the result of VAE is reported and there is no explanation why VAE is preferred to others. The author also claim their method is successful in different species but only songbirds result is provided. The figure 1 is confusing and hard to follow, it seems like A-G and H-N are two different examples from two datasets but in paper, A-H are from a dataset and H-N are from the other dataset. Figure 4 is filled with a lot of blocks but explained with very few words. The paper doesn't really relate to real neurons but mainly focus on utilizing artificial neural network techniques on animal behavior. The idea of this paper is potentially interesting but the results are primitive and need more rigorous analyses. """,2,0 neuroai19_61_3,"""This work addresses how VAE could help to model and characterize bird song generation. The authors propose to use a VAE to learn a low-dimensional representation of bird songs. Interpolations in the low-dimensional representations between stimuli are then used in classifications tasks to probe the perception of boundaries between stimuli and (in the future) the corresponding neural representations. The paper seems to be an innovative research agenda rather than a finalized project. The proposed model class is rather standard in the machine learning community but seems novel for the specific task of animal vocalization. While the results are rather preliminary, the proposed experimental of inferring low-dimensional representations of bird song and using the for behavioral experiments in combination with neural recordings seems innovative. Machine learning generally and VAE specifically seems mostly to be used as a fitting tool, not as a model of the neural circuit. The scientific questions are clearly explained, the methodology clear, but the results seem rather vague and preliminary. The paper uses machine learning generally and VAE specifically mostly as a fitting toolbox, not as a model of the neural circuit. While the project might be innovative on the neuroscience community, especially for sequence generation in animal vocalization, the relevance of the project for the machine learning community might be rather limited. The project is innovative and promising. At the current stage, the results seem to be rather preliminary, but the project might still be a good candidate for the workshop because the research design is innovative and potentially disruptive, so it might spark a good discussion.""",3,0 neuroai19_62_1,"""This is a clever basic idea. As far as I can tell the basic underlying mechanism is similar to the single-cell competitive STDP process proposed by Song, Miller and Abbott (2000), but at a circuit level, and loosely mapped on to cortical hierarchy. This aspect I believe is novel and interesting. Overall the results were pretty minimal and could have been expanded to strengthen the authors' case. The results are something like ""proof by example"". It would be more convincing to me if the authors could explore the robustness and generality of this mechanism. How does it depend on parameter choices? What regimes will it have strongest effect and when will it break? What is the role of the separate L2/3 and L4 networks? Although they mimic the cortical anatomy, what computational functions do they perform here? The methods section could be more elaborate to aid reproducibility. For example the STDP model is not described at all, and it is known that the implementation details can affect competition (additive vs multiplicative weight changes being one example). Also STDP simulations are notoriously sensitive to parameter choices. Overall it is well written and the figures are clear. However neither of the two conclusions the authors make are clear to me: ""First, the divergence of synaptic connections strengths grows bigger with nonspecific feedback inputs, which can increase SNNs learning capability. Second, more synapses can be trained in parallel, which can shorten the training times of SNNs."" Elaboration or The study is straight-ahead computational neuroscience. Although the text mentions potential applications to machine learning, neuromorphic computing, and deep learning, ML-style models are not implemented. It may be that mapping these mechanisms onto ANNs is not straightforward. The two conclusions mentioned As mentioned in my response to the technical rigor section, this study could be greatly improved by further simulations exploring parameter sensitivity, STDP rule choices, and network architecture choices. The basic idea has merit but needs to be better explored.""",3,0 neuroai19_62_2,"""The paper provides circuitry level simulation setup of STDP process for hierarchical network structure as observed in brain which is an interesting idea and could go long way with systematic exploration. The result presented is nice but could have been expanded with more analysis driven by different variations to the model parameters/input regimes/network structure to better understand the mechanisms in actions. As of now the results presented seem incomplete to fully support the authors' conclusions. Overall the paper is well written and easy to follow The concepts discussed in the paper seem to have much stronger association with computational neuroscience rather than ML. Although author does mention potential applications to machine learning but it is unclear how the mechanism presented in the paper could be implemented into artificial neural nets. The ideas and results presented in the paper are novel but as of now there doesn't seem to be enough analysis to fully support the author's claims. As mentioned in 'technical rigor' section, the paper could be greatly improved with further exploration of the model.""",3,0 neuroai19_62_3,"""Albeit similar models have been explored previously, investigating the role of top-down feedback in learning could be important for understanding learning in biological neural circuitry. The initial experiments provided hint at possible roles of top-down feedback in learning, however, more evidence is necessary to make significant conclusions. Neither theoretical nor intuitive justification is provided for what is observed. In particular, the model considered is just one of many possible ones (there could be inhibitory feedback, too, for instance, and for a more realistic setting, the feedback connections should possibly be plastic, as well.) It is not obvious to me how the results shown in fig. 3 come about. It seems like the blue curve is exactly the same in both cases, while the orange curve is just constant (zero) in the case without feedback. Since STDP normally leads to weight changes even without feedback, something must be unusual here. The manuscript is easy to follow. The machine learning relevance of the proposed approach is not obvious. I believe this idea needs a more rigorous evaluation and better motivation. A simple search for ""stdp with top down feedback"" or similar turns up a multitude of similar models; the authors should clarify what's their contribution.""",2,0