AMSR / conferences_raw / neuroai19 / neuroai19_BkxsVXtLUr.json
{"forum": "BkxsVXtLUr", "submission_url": "https://openreview.net/forum?id=BkxsVXtLUr", "submission_content": {"TL;DR": "We extend bilinear sparse coding and leverage video sequences to learn dynamic filters.", "keywords": ["Unsupervised Learning", "Spatio-Temporal Features", "Sparse Coding", "Equivariance", "Capsules"], "pdf": "/pdf/46dac219ebe9800896ac2387a695f2e5129d1faa.pdf", "authors": ["Dimitrios C. Gklezakos", "Rajesh P. N. Rao"], "title": "Learning a Convolutional Bilinear Sparse Code for Natural Videos", "abstract": "In contrast to the monolithic deep architectures used in deep learning today for computer vision, the visual cortex processes retinal images via two functionally distinct but interconnected networks: the ventral pathway for processing object-related information and the dorsal pathway for processing motion and transformations. Inspired by this cortical division of labor and properties of the magno- and parvocellular systems, we explore an unsupervised approach to feature learning that jointly learns object features and their transformations from natural videos. We propose a new convolutional bilinear sparse coding model that (1) allows independent feature transformations and (2) is capable of processing large images. Our learning procedure leverages smooth motion in natural videos. Our results show that our model can learn groups of features and their transformations directly from natural videos in a completely unsupervised manner. The learned \"dynamic filters\" exhibit certain equivariance properties, resemble cortical spatiotemporal filters, and capture the statistics of transitions between video frames. Our model can be viewed as one of the first approaches to demonstrate unsupervised learning of primary \"capsules\" (proposed by Hinton and colleagues for supervised learning) and has strong connections to the Lie group approach to visual perception.", "authorids": ["gklezd@cs.washington.edu", "rao@cs.washington.edu"], "paperhash": "gklezakos|learning_a_convolutional_bilinear_sparse_code_for_natural_videos"}, "submission_cdate": 1568211762538, "submission_tcdate": 1568211762538, "submission_tmdate": 1572589655835, "submission_ddate": null, "review_id": ["r1xUIDI9vH", "SJxFBdUqwH", "BJgyqy4iDr"], "review_url": ["https://openreview.net/forum?id=BkxsVXtLUr&noteId=r1xUIDI9vH", "https://openreview.net/forum?id=BkxsVXtLUr&noteId=SJxFBdUqwH", "https://openreview.net/forum?id=BkxsVXtLUr&noteId=BJgyqy4iDr"], "review_cdate": [1569511245876, 1569511488692, 1569566598835], "review_tcdate": [1569511245876, 1569511488692, 1569566598835], "review_tmdate": [1570047547357, 1570047547115, 1570047535872], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper53/AnonReviewer2"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper53/AnonReviewer1"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper53/AnonReviewer3"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["BkxsVXtLUr", "BkxsVXtLUr", "BkxsVXtLUr"], "review_content": [{"title": "Interesting connection between bilinear models and neuro but no quantitative analysis of model.", "importance": "2: Marginally important", "importance_comment": "They make modifications to an existing generative model of natural images. They do not make direct comparisons to previous models or study quantitatively the results of the model with respect to its parameters. 
It is difficult to judge whether the new model is important because it has not been evaluated except by eye it does seem to reconstruct an image.", "rigor_comment": "They show images of a single reconstruction but no quantification of reconstruction quality or comparison to previous methods. In the spirit of insight it would have been very nice to have a quantification of error with respect to parameters (priors on slow identity, fast form). If it had been evaluated and its efficacy varied in an interesting way with respect to the parameters of the model this could be a potentially important model to understand why the nervous system trades off between object identity associated features, transformation features, and speed.\n\nThe statement that: \u2018GAN\u2019s and VAE features are not typically interpretable.\u2019 Seemed broad and was unsupported by any citations and to my knowledge GAN\u2019s and VAE\u2019s have been used specifically to find interpretable features.\n\n\n", "clarity_comment": "Paper was organized, figures clear and readable. Some development of the model could have been left to the references and didn't add much to their contribution (e.g. Taylor approximation to a Lie model) .\n\nWhen they say \u2018steerable\u2019 filter I was a little confused, do they just mean the basis vectors learned vary smoothly with respect to some affine transform parameter? \n\nTheir statement of the novelty of their method: \u2018(1) allowing each feature to have its own transformation\u2019 was not clear. Does this mean previous methods learned the same transformation for all features. \n", "clarity": "3: Average readability", "evaluation": "2: Poor", "intersection_comment": "They make an interesting connection to speed of processing that rapid changes better represented by the magnocellular pathway would be associated with transformations and slow parvo with identity. It was not clear though where they experimentally varied/tested this prior in their algorithm. So while an interesting connection they did not make clear where they substantively pursue it. \n\nThey draw an analogy between the ventral and dorsal stream of cortex and bilinear models of images.", "intersection": "3: Medium", "comment": "The main place to improve is to have some quantitative analysis of the quality of their model perhaps MSE of image reconstruction. Then this evaluation could be used to study impacts of the parameters of their model which could then lead to neural hypotheses.\n\nThey have some qualitative evaluation in images of filters but they could explore the parameter space to understand what led to these features. \n\nOne of their stated novel contribution was that their filters were convolutional but they do not discuss the potential connection convolutional filters have to transformation of features which seemed like a gap. Weight sharing across shifted filters separates out feature and position yet many of their learned transformations are also translations. Is this an issue of spatial scale? 
This warranted some potentially interesting discussion though admittedly 4 pages isn\u2019t a lot of space.", "technical_rigor": "1: Not convincing", "category": "Neuro->AI"}, {"title": "Bilinear sparse coding model with dynamic features, but unclear if dynamics work", "importance": "2: Marginally important", "importance_comment": "Interesting extension to bilinear sparse coding models, but there is insufficient evidence in the work to support the claims in the abstracts - particularly that it captures the statistics of the transformations between frames.", "rigor_comment": "There are no quantifications of the performance of the model particularly in comparison to the original model that they are extending. The reconstruction of a single image in Fig1e is not a convincing test of the model - one would want to see how well the feature dynamics predict the next frame, if they are indeed sufficient to capture changes in the videos frame by frame.", "clarity_comment": "Figure legends/descriptions are too short, not totally clear what is shown in Figure 2.", "clarity": "3: Average readability", "evaluation": "2: Poor", "intersection_comment": "Unsupervised approaches might be interesting to some in the AI community.", "intersection": "3: Medium", "technical_rigor": "2: Marginally convincing", "category": "Common question to both AI & Neuro"}, {"title": "review", "importance": "3: Important", "importance_comment": "This paper continues a line of work from the 2000s that has not had significant recent interest. I am glad it is getting tried with modern compute scale and tools, and I believe the results are promising. This submission is not sufficient on its own to convince me though that this approach will tell us new things about the brain or about artificial neural networks.", "rigor_comment": "The algorithm was presented very clearly, and I believe all claims to be correct. I was surprised that x was set by projection rather than inference, and would have liked better understanding for why this was effective or desirable (though this may not be possible w/in length constraints).", "clarity_comment": "The writing was very good, and the algorithm and results were very clearly presented, especially considering length constraints.\n", "clarity": "4: Well-written", "evaluation": "4: Very good", "intersection_comment": "The paper presented an unsupervised machine learning algorithm, which was used to try to describe representation learning in the brain.\n", "intersection": "4: High", "comment": "This was a sensible algorithm for unsupervised feature learning, algorithm and results were clear, and results were reasonably good.\n", "technical_rigor": "4: Very convincing", "category": "AI->Neuro"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate ": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_confidence": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": "Accept (Poster)"}
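
For readers working with these raw records programmatically, the sketch below is a minimal example of loading and inspecting one. The local filename is an assumption (the record would need to be saved to disk first); the keys match the record above, where review_id, review_url, and review_content are parallel lists.

import json

# Path is hypothetical; assumes the record above was saved locally.
with open("neuroai19_BkxsVXtLUr.json") as f:
    record = json.load(f)

# Top-level submission metadata.
print(record["submission_content"]["title"])
print("Decision:", record["decision"])

# Reviews are stored as parallel lists: review_id[i] pairs with review_content[i].
for rid, review in zip(record["review_id"], record["review_content"]):
    print(rid, "-", review["title"])
    print("  evaluation:", review["evaluation"])
    print("  technical rigor:", review["technical_rigor"])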
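
For context on the model the abstract describes, the sketch below shows a generic bilinear sparse coding reconstruction, in which an image is explained jointly by a sparse "object" code and a sparse "transformation" code. This is an illustration of the general framework only, not the authors' convolutional variant; all array names and sizes are made up for the example.

import numpy as np

rng = np.random.default_rng(0)
n_feat, n_xform, n_pix = 8, 4, 256  # illustrative sizes

# Bilinear basis: one basis vector per (feature, transformation) pair.
B = rng.standard_normal((n_feat, n_xform, n_pix))
# Sparse "object" code x and sparse "transformation" code y.
x = rng.standard_normal(n_feat) * (rng.random(n_feat) < 0.3)
y = rng.standard_normal(n_xform) * (rng.random(n_xform) < 0.5)

# Bilinear reconstruction: I_hat[p] = sum_{i,j} x[i] * y[j] * B[i, j, p]
I_hat = np.einsum("i,j,ijp->p", x, y, B)
print(I_hat.shape)  # (256,)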