{"forum": "B1lj77F88B", "submission_url": "https://openreview.net/forum?id=B1lj77F88B", "submission_content": {"TL;DR": "We present an open-loop brain-machine interface whose performance is unconstrained to the traditionally used bag-of-words approach.", "keywords": ["Brain-Computer interface", "Speech decoding", "Neural signal processing"], "pdf": "/pdf/9801301a950b2cd298ffabfe7bb6c6127a522b92.pdf", "authors": ["Janaki Sheth", "Ariel Tankus", "Michelle Tran", "Nader Pouratian", "Itzhak Fried", "William Speier"], "title": "Translating neural signals to text using a Brain-Computer Interface", "abstract": "Brain-Computer Interfaces (BCI) may help patients with faltering communication abilities due to neurodegenerative diseases produce text or speech by direct neural processing. However, their practical realization has proven difficult due to limitations in speed, accuracy, and generalizability of existing interfaces. To this end, we aim to create a BCI that decodes text directly from neural signals. We implement a framework that initially isolates frequency bands in the input signal encapsulating differential information regarding production of various phonemic classes. These bands form a feature set that feeds into an LSTM which discerns at each time point probability distributions across all phonemes uttered by a subject. Finally, a particle filtering algorithm temporally smooths these probabilities incorporating prior knowledge of the English language to output text corresponding to the decoded word. Further, in producing an output, we abstain from constraining the reconstructed word to be from a given bag-of-words, unlike previous studies. 
The empirical success of our proposed approach offers promise for the employment of such an interface by patients in unfettered, naturalistic environments.", "authorids": ["janaki.sheth@physics.ucla.edu", "arielta@post.tau.ac.il", "metran@mednet.ucla.edu", "npouratian@mednet.ucla.edu", "ifried@mednet.ucla.edu", "speier@ucla.edu"], "paperhash": "sheth|translating_neural_signals_to_text_using_a_braincomputer_interface"}, "submission_cdate": 1568211746852, "submission_tcdate": 1568211746852, "submission_tmdate": 1572129306391, "submission_ddate": null, "review_id": ["rJldpDc8vB", "HklLYBtFwS", "HkeDvrZjwB"], "review_url": ["https://openreview.net/forum?id=B1lj77F88B&noteId=rJldpDc8vB", "https://openreview.net/forum?id=B1lj77F88B&noteId=HklLYBtFwS", "https://openreview.net/forum?id=B1lj77F88B&noteId=HkeDvrZjwB"], "review_cdate": [1569265599706, 1569457534156, 1569555807343], "review_tcdate": [1569265599706, 1569457534156, 1569555807343], "review_tmdate": [1570047562309, 1570047552591, 1570047538314], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper15/AnonReviewer1"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper15/AnonReviewer3"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper15/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1lj77F88B", "B1lj77F88B", "B1lj77F88B"], "review_content": [{"evaluation": "3: Good", "intersection": "5: Outstanding", "importance_comment": "Decoding speech (or intended speech) from neural signals is an important problem with wide potential clinical applications. This study moves towards a more naturalistic setting by decoding a larger set of words than what was previously attempted. 
This is an important step towards a useful device.", "clarity": "4: Well-written", "technical_rigor": "2: Marginally convincing", "intersection_comment": "This is an excellent example of an AI method applied to an important neuroscience problem.", "rigor_comment": "The main issue I have with the paper lies in the description of the experiment. It is not clear what constitutes a \"trial\". What are the lengths of the time bins after computing the spectral power? This is presumably coarser than 30 kHz. What data are used for training versus testing, and are these random bins or non-interleaved segments of the total timeseries? From the description of the loss as cross-entropy, it sounds like the output target for the LSTM is phoneme identity. What is the total number of phonemes? It is unclear whether the language model built on top of this is actually trained on the subject data or the phoneme to word map is just a result of the Brown corpus + CMU dictionary. This needs to be made clear.\n\nSimilarly, it sounds like the goal of the study is to decode words from a very large corpus. However, the experimental design section states that the subjects only performed trials where they said \"yes\", \"no\", or phoneme strings which presumably don't map to real words. So what is the part where you try and decode their actual speech? Is it from observations of the rest of their hospital stay while they are speaking with family, friends, doctors, and nurses? Is this what's tested after only training on the phoneme string data? This should all be made explicit.\n\nI found the level of detail in the LSTM section more than necessary, and the explicit parameters of the ADAM method and learning rate could be left out. (As is, the level of detail in this paper is not enough to make the study reproducible, but I am glad to see the code will be made available. With the code, you don't need these parameters in the text.) 
Instead, I would use the space gained to explain the basic experimental setup and algorithm design in more detail.\n\nOn the other hand, there could be more detail about the smoothing + particle filter steps. Is the automaton model a Markov chain? Figure 1 doesn't add much in my opinion. You could add a lot more information by creating a diagram of how phoneme + automaton gives the word output. Adding some math like the PF update equations could make everything more precise.", "comment": "This is a nice paper as is, and will be better if the authors can address my issues with the technical clarity. With some more effort given to formatting (like removing double-spacing of bibliography), there should be space for these improvements.\n\nI would like to hear what the authors think about decoding speech, where the subject actually says a word, versus decoding intended speech, where the subject only imagines saying a word. Intended speech sounds more difficult to me, since there won't be any motor signals. This is an important challenge to overcome for someone with motor impairment. Is there any way to interpret the signals that this framework learns? Can you tell which electrodes are most relevant for the decoding task, and are these in speech or motor areas? This is probably something that could go into the discussion.", "importance": "4: Very important", "title": "Important BCI decoding topic; needs more explanation of study basics", "category": "AI->Neuro", "clarity_comment": "The paper is generally well-written with correct grammar and good explanations. My main issues are with the technical clarity (see technical comments).\n\nSmall edits:\n* L. 1, I'd add \"may\" before help\n* L. 2, could strike \"output\" from \"speech output\"\n* L. 33, \"the speech cortex\" -> \"speech cortex\"\n* Ll. 39-42, this last sentence is a run-on. Also, it is missing a comma before \"i.e.\"\n* L. 48, use backticks `` for open quotes in Latex\n* L. 
60, missing a comma \"Across multiple subject, vowels...\"\n* L. 63, add \"The\" to start of 1st sentence\n* L. 64, \"Futher\" -> \"Furthermore\"\n* Figure 1, typo \"nodes\""}, {"title": "BCI to read text from neural signal", "importance": "4: Very important", "importance_comment": "This is a very important question. If the accuracy is high, an algorithm can increase the life quality of disabled or stroke patients. ", "rigor_comment": "The authors used a standard RNN (LSTM) network to use neural data (LFP) for word detection. It is a bit unclear how they select their features. It is also unclear how their algorithm performs (37% accuracy) compared to other algorithms. It is thus hard to judge the significance of the result.", "clarity_comment": "The text is very brief in terms of experimental detail and methods. It is challenging to figure out what exactly the algorithm did and the advantage of this particular algorithm based on the results written.", "clarity": "2: Can get the general idea", "evaluation": "2: Poor", "intersection_comment": "The authors tried to use an RNN to encode LFP signals for speech recognition.", "intersection": "3: Medium", "technical_rigor": "1: Not convincing", "category": "AI->Neuro"}, {"title": "Important problem, interesting results for now ", "importance": "4: Very important", "importance_comment": "The goal behind the paper is very useful: being able to decode complex speech and not just choose an option from a limited pool. A solution which allows patients to communicate easily and rapidly would greatly improve their quality of life. Using a language model or any kind of recurrent model to use past brain activity to predict more accurately is a crucial direction and the authors are correct in pursuing it. ", "rigor_comment": "The experiments appear convincing. The limited space doesn't allow a very deep understanding of all the procedures that were used. 
", "clarity_comment": "The paper is well written.", "clarity": "4: Well-written", "evaluation": "4: Very good", "intersection_comment": "This work is an example of using AI tools to decode brain activity and isn't about using AI as a model of what the brain is doing. It is a nice illustration of the role that AI could achieve in that domain.", "intersection": "4: High", "comment": "This is an interesting and promising approach. The problem that is addressed is important and the methods seem sound. However, the authors limit their analysis to very simple stimuli (yes and no and some non-word sounds), which reduces the impact of their work (they motivate their approach as not being constrained to decoding out of a small pool).", "technical_rigor": "4: Very convincing", "category": "AI->Neuro"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate ": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_confidence": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": "Accept (Poster)"}