{"forum": "SJxoX7K8LS", "submission_url": "https://openreview.net/forum?id=SJxoX7K8LS", "submission_content": {"TL;DR": "Unsupervised analysis of data recorded from the peripheral nervous system denoises and categorises signals.", "keywords": ["Machine Learning", "Peripheral Nervous System", "Convolutional Neural Networks", "Auto-encoder", "Signal Processing"], "authors": ["Thomas J Hardcastle", "Susannah Lee", "Lorenz Wernisch", "Pascal Fortier-Poisson", "Sudha Shunmugam", "Kalon Hewage", "Tris Edwards", "Oliver Armitage", "Emil Hewage"], "title": "Coordinate-VAE: Unsupervised clustering and de-noising of peripheral nervous system data", "abstract": "The peripheral nervous system represents the input/output system for the brain. Cuff electrodes implanted on the peripheral nervous system allow observation and control over this system, however, the data produced by these electrodes have a low signal-to-noise ratio and a complex signal content. In this paper, we consider the analysis of neural data recorded from the vagus nerve in animal models, and develop an unsupervised learner based on convolutional neural networks that is able to simultaneously de-noise and cluster regions of the data by signal content.", "authorids": ["thomas@cbas.global", "susie@cbas.global", "lorenz@bios.health", "pascal@bios.health", "sudha@cbas.global", "kalon@cbas.global", "tris@cbas.global", "oliver@cbas.global", "emil@cbas.global"], "pdf": "/pdf/27f296539e8906c55b5027030cb624f2495d4e7d.pdf", "paperhash": "hardcastle|coordinatevae_unsupervised_clustering_and_denoising_of_peripheral_nervous_system_data"}, "submission_cdate": 1568211747274, "submission_tcdate": 1568211747274, "submission_tmdate": 1572475195207, "submission_ddate": null, "review_id": ["B1eIAXOYDH", "HklTyQXsvS"], "review_url": ["https://openreview.net/forum?id=SJxoX7K8LS¬eId=B1eIAXOYDH", "https://openreview.net/forum?id=SJxoX7K8LS¬eId=HklTyQXsvS"], "review_cdate": [1569453005752, 1569563365170], "review_tcdate": [1569453005752, 1569563365170], "review_tmdate": [1570047553000, 1570047536514], "review_readers": [["everyone"], ["everyone"]], "review_writers": [["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper16/AnonReviewer1"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper16/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["SJxoX7K8LS", "SJxoX7K8LS"], "review_content": [{"title": "Unsupervised VAEs applied to the LFP, potentially compelling but needs more detail in the results", "importance": "4: Very important", "importance_comment": "Recent advancements in technology are making it much easier to perform the large-scale recordings of neuronal activity. This \"big-data\", which can consist of thousands of hours of electrophysiological signals, which can be difficult or otherwise impractical to analyze manually. Developing unsupervised methods that can reduce the dimensionality of this data and cluster together similar states is of critical importance.", "rigor_comment": "There doesn't seem to be any measure or statistics about how much the signal was denoised, besides what is displayed in the figures. Would it be possible to create a synthetic dataset with noise, and report how much the model improves the SNR under increasing levels of noise? Or even adding some noise to the original LFP signal. Additionally, there isn't much info given in regards to clustering. Fig. 3 shows the original output (with human labels) at the top and the reconstruction with labels at the bottom. 
Are the bottom labels based on some sort of unsupervised clustering of data in the latent dimension? If so, how was the number of clusters determined?\n\nI also have some concerns about the amount of training/testing data and the number of subjects. The model was trained on one hour of data from a single subject and then evaluated in a second subject. How much data was used from the second subject? Is it more than the 100 seconds shown in Fig. 3? I think any claims about generalizability would require the use of some sort of statistical measure of performance, followed by evaluations in multiple subjects.\n\nFor the VAE, is it possible to train with a smaller latent space? If you reduced the size to a small value (like 2 or 3), could you plot the data and see defined clusters representing respiration states? Also, how is the number of time-coordinates (n) determined, and how does performance change as you change n?", "clarity_comment": "The paper is nicely written for the most part. For the figures, it would be nice to see a zoomed-in comparison between the input signal and the reconstruction at a shorter timescale (like 100 ms). For the methods, it says the loss function was MSE and the negative of the KL divergence. Could you clarify what you mean by this? Also, Reference 11 doesn't seem to match its citation.", "clarity": "4: Well-written", "evaluation": "2: Poor", "intersection_comment": "Research into using unsupervised methods to analyze electrophysiological signals is an excellent example of applying AI to neuroscience, but it's difficult to evaluate this paper without any additional details on the VAE's performance.", "intersection": "4: High", "technical_rigor": "1: Not convincing", "category": "AI->Neuro"}, {"title": "Promising direction, but key aspects missing.", "importance": "3: Important", "importance_comment": "The authors consider a specific VAE, i.e. one with additional information going from the input to the output, namely the identity of subsampled low-value time points in the data. Although this approach appears to categorize the data using a small number of latents, it is unclear which parts of the data enable this to work. Simulation results are missing. Some details in the methods are unclear. No intuition is provided as to why the coordinate encoder worked!", "rigor_comment": "The authors do not mention whether their latent space grows with n. Moreover, with the addition of the coordinate encoding, the latent space should be 40 instead of 20, if I understand correctly.\n\nThe denoising is interesting, but needs to be compared with simpler methods like low-pass filtering and penalized matrix decomposition.\n\nWhich parts of the approach are actually important to this technique? The authors have only shown that the coordinate encoder works - which parts of this are important? More importantly, why? Why does this technique work?\n\nSimulation results would be very helpful.", "clarity_comment": "The manuscript is fairly clear, apart from the comments mentioned above. However, when it comes to the method of inputting the coordinate vector, it is quite unclear, even though this is the novelty of the method. 
It would be good to give an example or provide a figure of the procedure.", "clarity": "3: Average readability", "evaluation": "3: Good", "intersection_comment": "Interesting addition to a common AI model, to denoise neural data.", "intersection": "3: Medium", "comment": "Definitely needs more work to be able to tell which parts of the model are important for the efficient representation. Promising direction.", "technical_rigor": "3: Convincing", "category": "AI->Neuro"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate ": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_confidence": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": "Accept (Poster)"}