<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Erin Ricci</style></author><author><style face="normal" font="default" size="100%">Sameah Haider</style></author><author><style face="normal" font="default" size="100%">Lynn M McCane</style></author><author><style face="normal" font="default" size="100%">Susan M Heckman</style></author><author><style face="normal" font="default" size="100%">Jonathan Wolpaw</style></author><author><style face="normal" font="default" size="100%">Theresa M Vaughan</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">A practical, intuitive brain-computer interface for communicating 'yes' or 'no' by listening.</style></title><secondary-title><style face="normal" font="default" size="100%">J Neural Eng</style></secondary-title><alt-title><style face="normal" font="default" size="100%">J Neural Eng</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Adult</style></keyword><keyword><style  face="normal" font="default" size="100%">Aged</style></keyword><keyword><style  face="normal" font="default" size="100%">Algorithms</style></keyword><keyword><style  face="normal" font="default" size="100%">Auditory Perception</style></keyword><keyword><style  face="normal" font="default" size="100%">brain-computer interfaces</style></keyword><keyword><style  face="normal" font="default" size="100%">Communication Aids for Disabled</style></keyword><keyword><style  face="normal" font="default" size="100%">Electroencephalography</style></keyword><keyword><style  face="normal" font="default" size="100%">Equipment Design</style></keyword><keyword><style  face="normal" font="default" 
size="100%">Equipment Failure Analysis</style></keyword><keyword><style  face="normal" font="default" size="100%">Female</style></keyword><keyword><style  face="normal" font="default" size="100%">Humans</style></keyword><keyword><style  face="normal" font="default" size="100%">Male</style></keyword><keyword><style  face="normal" font="default" size="100%">Man-Machine Systems</style></keyword><keyword><style  face="normal" font="default" size="100%">Middle Aged</style></keyword><keyword><style  face="normal" font="default" size="100%">Quadriplegia</style></keyword><keyword><style  face="normal" font="default" size="100%">Treatment Outcome</style></keyword><keyword><style  face="normal" font="default" size="100%">User-Computer Interface</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2014</style></year><pub-dates><date><style  face="normal" font="default" size="100%">06/2014</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/24838278</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">11</style></volume><pages><style face="normal" font="default" size="100%">035003</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">OBJECTIVE:
Previous work has shown that it is possible to build an EEG-based binary brain-computer interface system (BCI) driven purely by shifts of attention to auditory stimuli. However, previous studies used abrupt, abstract stimuli that are often perceived as harsh and unpleasant, and whose lack of inherent meaning may make the interface unintuitive and difficult for beginners. We aimed to establish whether we could transition to a system based on more natural, intuitive stimuli (spoken words 'yes' and 'no') without loss of performance, and whether the system could be used by people in the locked-in state.
APPROACH:
We performed a counterbalanced, interleaved within-subject comparison between an auditory streaming BCI that used beep stimuli, and one that used word stimuli. Fourteen healthy volunteers performed two sessions each, on separate days. We also collected preliminary data from two subjects with advanced amyotrophic lateral sclerosis (ALS), who used the word-based system to answer a set of simple yes-no questions.
MAIN RESULTS:
The N1, N2 and P3 event-related potentials elicited by words varied more between subjects than those elicited by beeps. However, the difference between responses to attended and unattended stimuli was more consistent with words than beeps. Healthy subjects' performance with word stimuli (mean 77% ± 3.3 s.e.) was slightly but not significantly better than their performance with beep stimuli (mean 73% ± 2.8 s.e.). The two subjects with ALS used the word-based BCI to answer questions with a level of accuracy similar to that of the healthy subjects.
SIGNIFICANCE:
Since performance using word stimuli was at least as good as performance using beeps, we recommend that auditory streaming BCI systems be built with word stimuli to make the system more pleasant and intuitive. Our preliminary data show that word-based streaming BCI is a promising tool for communication by people who are locked in.</style></abstract><issue><style face="normal" font="default" size="100%">3</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author><author><style face="normal" font="default" size="100%">Peter Brunner</style></author><author><style face="normal" font="default" size="100%">Lester A Gerhardt</style></author><author><style face="normal" font="default" size="100%">H Bischof</style></author><author><style face="normal" font="default" size="100%">Jonathan Wolpaw</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Brain-computer interfaces (BCIs): Detection Instead of Classification.</style></title><secondary-title><style face="normal" font="default" size="100%">J Neurosci Methods</style></secondary-title><alt-title><style face="normal" font="default" size="100%">J. Neurosci. 
Methods</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Adult</style></keyword><keyword><style  face="normal" font="default" size="100%">Algorithms</style></keyword><keyword><style  face="normal" font="default" size="100%">Brain</style></keyword><keyword><style  face="normal" font="default" size="100%">Brain Mapping</style></keyword><keyword><style  face="normal" font="default" size="100%">Electrocardiography</style></keyword><keyword><style  face="normal" font="default" size="100%">Electroencephalography</style></keyword><keyword><style  face="normal" font="default" size="100%">Humans</style></keyword><keyword><style  face="normal" font="default" size="100%">Male</style></keyword><keyword><style  face="normal" font="default" size="100%">Man-Machine Systems</style></keyword><keyword><style  face="normal" font="default" size="100%">Normal Distribution</style></keyword><keyword><style  face="normal" font="default" size="100%">Online Systems</style></keyword><keyword><style  face="normal" font="default" size="100%">Signal Detection, Psychological</style></keyword><keyword><style  face="normal" font="default" size="100%">Signal Processing, Computer-Assisted</style></keyword><keyword><style  face="normal" font="default" size="100%">Software Validation</style></keyword><keyword><style  face="normal" font="default" size="100%">User-Computer Interface</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2008</style></year><pub-dates><date><style  face="normal" font="default" size="100%">01/2008</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/17920134</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">167</style></volume><pages><style face="normal" font="default" size="100%">51-62</style></pages><language><style face="normal" font="default" 
size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Many studies over the past two decades have shown that people can use brain signals to convey their intent to a computer through brain-computer interfaces (BCIs). These devices operate by recording signals from the brain and translating these signals into device commands. They can be used by people who are severely paralyzed to communicate without any use of muscle activity. One of the major impediments in translating this novel technology into clinical applications is the current requirement for preliminary analyses to identify the brain signal features best suited for communication. This paper introduces and validates signal detection, which does not require such analysis procedures, as a new concept in BCI signal processing. This detection concept is realized with Gaussian mixture models (GMMs) that are used to model resting brain activity so that any change in relevant brain signals can be detected. It is implemented in a package called SIGFRIED (SIGnal modeling For Real-time Identification and Event Detection). The results indicate that SIGFRIED produces results that are within the range of those achieved using a common analysis strategy that requires preliminary identification of signal features. They indicate that such laborious analysis procedures could be replaced by merely recording brain signals during rest. In summary, this paper demonstrates how SIGFRIED could be used to overcome one of the present impediments to translation of laboratory BCI demonstrations into clinically practical applications.</style></abstract><issue><style face="normal" font="default" size="100%">1</style></issue></record></records></xml>