<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Herff, C.</style></author><author><style face="normal" font="default" size="100%">Heger, D.</style></author><author><style face="normal" font="default" size="100%">Pesters, Adriana de</style></author><author><style face="normal" font="default" size="100%">Telaar, D.</style></author><author><style face="normal" font="default" size="100%">Peter Brunner</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author><author><style face="normal" font="default" size="100%">Schultz, T.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Brain-to-text: Decoding spoken sentences from phone representations in the brain.</style></title><secondary-title><style face="normal" font="default" size="100%">Journal of Neural Engineering</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">automatic speech recognition</style></keyword><keyword><style  face="normal" font="default" size="100%">brain-computer interface</style></keyword><keyword><style  face="normal" font="default" size="100%">broadband gamma</style></keyword><keyword><style  face="normal" font="default" size="100%">ECoG</style></keyword><keyword><style  face="normal" font="default" size="100%">Electrocorticography</style></keyword><keyword><style  face="normal" font="default" size="100%">pattern recognition</style></keyword><keyword><style  face="normal" font="default" size="100%">speech decoding</style></keyword><keyword><style  face="normal" font="default" size="100%">speech production</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2015</style></year><pub-dates><date><style  face="normal" font="default" 
size="100%">06/2015</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://journal.frontiersin.org/article/10.3389/fnins.2015.00217/abstract</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">It has long been speculated whether communication between humans and machines based on natural speech related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings.Specifically, we implemented a system, which we call Brain-To-Text that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. 
In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Häuser, Ann-Katrin</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">A general method for assessing brain–computer interface performance and its limitations.</style></title><secondary-title><style face="normal" font="default" size="100%">Journal of Neural Engineering</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">brain-computer interface</style></keyword><keyword><style  face="normal" font="default" size="100%">information gain</style></keyword><keyword><style  face="normal" font="default" size="100%">information transfer rate</style></keyword><keyword><style  face="normal" font="default" size="100%">Neuroprosthetics</style></keyword><keyword><style  face="normal" font="default" size="100%">performance evaluation</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2014</style></year><pub-dates><date><style  face="normal" font="default" size="100%">03/2014</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/24658406</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">11</style></volume><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Objective.
When researchers evaluate brain–computer interface (BCI) systems, they want quantitative answers to questions such as: How good is the system's performance? How good does it need to be? And is it capable of reaching the desired level in the future? In response to the current lack of objective, quantitative, study-independent approaches, we introduce methods that help to address such questions. We identified three challenges: (I) the need for efficient measurement techniques that adapt rapidly and reliably to capture a wide range of performance levels; (II) the need to express results in a way that allows comparison between similar but non-identical tasks; (III) the need to measure the extent to which certain components of a BCI system (e.g. the signal processing pipeline) not only support BCI performance, but also potentially restrict the maximum level it can reach. Approach. For challenge (I), we developed an automatic staircase method that adjusted task difficulty adaptively along a single abstract axis. For challenge (II), we used the rate of information gain between two Bernoulli distributions: one reflecting the observed success rate, the other reflecting chance performance estimated by a matched random-walk method. This measure includes Wolpaw's information transfer rate as a special case, but addresses the latter's limitations including its restriction to item-selection tasks. To validate our approach and address challenge (III), we compared four healthy subjects' performance using an EEG-based BCI, a 'Direct Controller' (a high-performance hardware input device), and a 'Pseudo-BCI Controller' (the same input device, but with control signals processed by the BCI signal processing pipeline). Main results. Our results confirm the repeatability and validity of our measures, and indicate that our BCI signal processing pipeline reduced attainable performance by about 33% (21 bits/min). Significance.
Our approach provides a flexible basis for evaluating BCI performance and its limitations, across a wide range of tasks and task difficulties.</style></abstract><issue><style face="normal" font="default" size="100%">026018</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">A L Ritaccio</style></author><author><style face="normal" font="default" size="100%">Peter Brunner</style></author><author><style face="normal" font="default" size="100%">Gunduz, Aysegul</style></author><author><style face="normal" font="default" size="100%">Hermes, Dora</style></author><author><style face="normal" font="default" size="100%">Hirsch, Lawrence J</style></author><author><style face="normal" font="default" size="100%">Jacobs, Joshua</style></author><author><style face="normal" font="default" size="100%">Kamada, Kyousuke</style></author><author><style face="normal" font="default" size="100%">Kastner, Sabine</style></author><author><style face="normal" font="default" size="100%">Robert T. 
Knight</style></author><author><style face="normal" font="default" size="100%">Lesser, Ronald P</style></author><author><style face="normal" font="default" size="100%">Miller, Kai</style></author><author><style face="normal" font="default" size="100%">Sejnowski, Terrence</style></author><author><style face="normal" font="default" size="100%">Worrell, Gregory</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Proceedings of the Fifth International Workshop on Advances in Electrocorticography.</style></title><secondary-title><style face="normal" font="default" size="100%">Epilepsy Behav</style></secondary-title><alt-title><style face="normal" font="default" size="100%">Epilepsy Behav</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Brain Mapping</style></keyword><keyword><style  face="normal" font="default" size="100%">brain-computer interface</style></keyword><keyword><style  face="normal" font="default" size="100%">electrical stimulation mapping</style></keyword><keyword><style  face="normal" font="default" size="100%">Electrocorticography</style></keyword><keyword><style  face="normal" font="default" size="100%">functional mapping</style></keyword><keyword><style  face="normal" font="default" size="100%">Gamma-frequency electroencephalography</style></keyword><keyword><style  face="normal" font="default" size="100%">High-frequency oscillations</style></keyword><keyword><style  face="normal" font="default" size="100%">Neuroprosthetics</style></keyword><keyword><style  face="normal" font="default" size="100%">Seizure detection</style></keyword><keyword><style  face="normal" font="default" size="100%">Subdural grid</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2014</style></year><pub-dates><date><style  face="normal" font="default" 
size="100%">12/2014</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/25461213</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">41</style></volume><pages><style face="normal" font="default" size="100%">183-92</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;The Fifth International Workshop on Advances in Electrocorticography convened in San Diego, CA, on November 7-8, 2013. Advancements in methodology, implementation, and commercialization across both research and in the interval year since the last workshop were the focus of the gathering. Electrocorticography (ECoG) is now firmly established as a preferred signal source for advanced research in functional, cognitive, and neuroprosthetic domains. Published output in ECoG fields has increased tenfold in the past decade. 
These proceedings attempt to summarize the state of the art.&lt;/p&gt;</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Jeremy Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Moinuddin, Aisha</style></author><author><style face="normal" font="default" size="100%">Häuser, Ann-Katrin</style></author><author><style face="normal" font="default" size="100%">Kienzle, Stephan</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Communication and control by listening: towards optimal design of a two-class auditory streaming brain-computer interface.</style></title><secondary-title><style face="normal" font="default" size="100%">Frontiers in Neuroscience</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">auditory attention</style></keyword><keyword><style  face="normal" font="default" size="100%">auditory event-related potentials</style></keyword><keyword><style  face="normal" font="default" size="100%">brain-computer interface</style></keyword><keyword><style  face="normal" font="default" size="100%">dichotic listening</style></keyword><keyword><style  face="normal" font="default" size="100%">N1 potential</style></keyword><keyword><style  face="normal" font="default" size="100%">P3 potential</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2012</style></year><pub-dates><date><style  face="normal" font="default" size="100%">12/2012</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/23267312</style></url></web-urls></urls><volume><style face="normal" font="default" 
size="100%">6</style></volume><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Most brain-computer interface (BCI) systems require users to modulate brain signals in response to visual stimuli. Thus, they may not be useful to people with limited vision, such as those with severe paralysis. One important approach for overcoming this issue is auditory streaming, an approach whereby a BCI system is driven by shifts of attention between two simultaneously presented auditory stimulus streams. Motivated by the long-term goal of translating such a system into a reliable, simple yes-no interface for clinical usage, we aim to answer two main questions. First, we asked which of two previously published variants provides superior performance: a fixed-phase (FP) design in which the streams have equal period and opposite phase, or a drifting-phase (DP) design where the periods are unequal. We found FP to be superior to DP (p = 0.002): average performance levels were 80 and 72% correct, respectively. We were also able to show, in a pilot with one subject, that auditory streaming can support continuous control and neurofeedback applications: by shifting attention between ongoing left and right auditory streams, the subject was able to control the position of a paddle in a computer game. Second, we examined whether the system is dependent on eye movements, since it is known that eye movements and auditory attention may influence each other, and any dependence on the ability to move one’s eyes would be a barrier to translation to paralyzed users. We discovered that, despite instructions, some subjects did make eye movements that were indicative of the direction of attention. However, there was no correlation, across subjects, between the reliability of the eye movement signal and the reliability of the BCI system, indicating that our system was configured to work independently of eye movement. 
Together, these findings are an encouraging step forward toward BCIs that provide practical communication and control options for the most severely paralyzed users.
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">A L Ritaccio</style></author><author><style face="normal" font="default" size="100%">Beauchamp, Michael</style></author><author><style face="normal" font="default" size="100%">Bosman, Conrado</style></author><author><style face="normal" font="default" size="100%">Peter Brunner</style></author><author><style face="normal" font="default" size="100%">Chang, Edward</style></author><author><style face="normal" font="default" size="100%">Nathan E. Crone</style></author><author><style face="normal" font="default" size="100%">Gunduz, Aysegul</style></author><author><style face="normal" font="default" size="100%">Disha Gupta</style></author><author><style face="normal" font="default" size="100%">Robert T. Knight</style></author><author><style face="normal" font="default" size="100%">Leuthardt, Eric</style></author><author><style face="normal" font="default" size="100%">Litt, Brian</style></author><author><style face="normal" font="default" size="100%">Moran, Daniel</style></author><author><style face="normal" font="default" size="100%">Ojemann, Jeffrey</style></author><author><style face="normal" font="default" size="100%">Parvizi, Josef</style></author><author><style face="normal" font="default" size="100%">Ramsey, Nick</style></author><author><style face="normal" font="default" size="100%">Rieger, Jochem</style></author><author><style face="normal" font="default" size="100%">Viventi, Jonathan</style></author><author><style face="normal" font="default" size="100%">Voytek, Bradley</style></author><author><style face="normal" font="default" size="100%">Williams, Justin</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Proceedings of the 
Third International Workshop on Advances in Electrocorticography.</style></title><secondary-title><style face="normal" font="default" size="100%">Epilepsy Behav</style></secondary-title><alt-title><style face="normal" font="default" size="100%">Epilepsy Behav</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Brain Mapping</style></keyword><keyword><style  face="normal" font="default" size="100%">brain-computer interface</style></keyword><keyword><style  face="normal" font="default" size="100%">Electrocorticography</style></keyword><keyword><style  face="normal" font="default" size="100%">Gamma-frequency electroencephalography</style></keyword><keyword><style  face="normal" font="default" size="100%">high-frequency oscillation</style></keyword><keyword><style  face="normal" font="default" size="100%">Neuroprosthetics</style></keyword><keyword><style  face="normal" font="default" size="100%">Seizure detection</style></keyword><keyword><style  face="normal" font="default" size="100%">Subdural grid</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2012</style></year><pub-dates><date><style  face="normal" font="default" size="100%">12/2012</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/23160096</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">25</style></volume><pages><style face="normal" font="default" size="100%">605-13</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">The Third International Workshop on Advances in Electrocorticography (ECoG) was convened in Washington, DC, on November 10-11, 2011. 
As in prior meetings, a true multidisciplinary fusion of clinicians, scientists, and engineers gathered to summarize contemporary experiences in brain surface recordings. The proceedings of this meeting serve as evidence of a very robust and transformative field but will yet again require revision to incorporate the advances that the following year will surely bring.</style></abstract><issue><style face="normal" font="default" size="100%">4</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Tangermann, M.</style></author><author><style face="normal" font="default" size="100%">Muller, K.R.</style></author><author><style face="normal" font="default" size="100%">Aertsen, A.</style></author><author><style face="normal" font="default" size="100%">Niels Birbaumer</style></author><author><style face="normal" font="default" size="100%">Christoph Braun</style></author><author><style face="normal" font="default" size="100%">Brunner, Clemens</style></author><author><style face="normal" font="default" size="100%">Leeb, R.</style></author><author><style face="normal" font="default" size="100%">Mehring, C.</style></author><author><style face="normal" font="default" size="100%">Miller, K.J.</style></author><author><style face="normal" font="default" size="100%">Mueller-Putz, G.</style></author><author><style face="normal" font="default" size="100%">Nolte, G.</style></author><author><style face="normal" font="default" size="100%">Pfurtscheller, G.</style></author><author><style face="normal" font="default" size="100%">Preissl, H.</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author><author><style face="normal" font="default" size="100%">Schlögl, A.</style></author><author><style face="normal" font="default" size="100%">Vidaurre, C.</style></author><author><style
face="normal" font="default" size="100%">Waldert, S.</style></author><author><style face="normal" font="default" size="100%">Benjamin Blankertz</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Review of the BCI Competition IV.</style></title><secondary-title><style face="normal" font="default" size="100%">Frontiers in Neuroprosthetics</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">BCI</style></keyword><keyword><style  face="normal" font="default" size="100%">brain-computer interface</style></keyword><keyword><style  face="normal" font="default" size="100%">competition</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2012</style></year><pub-dates><date><style  face="normal" font="default" size="100%">07/2012</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/22811657</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">6</style></volume><pages><style face="normal" font="default" size="100%">1-31</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">The BCI competition IV stands in the tradition of prior BCI competitions that aim to provide high quality neuroscientific data for open access to the scientific community. As experienced already in prior competitions not only scientists from the narrow field of BCI compete, but scholars with a broad variety of backgrounds and nationalities. They include high specialists as well as students. The goals of all BCI competitions have always been to challenge with respect to novel paradigms and complex data. 
We report on the following challenges: (1) asynchronous data, (2) synthetic data, (3) multi-class continuous data, (4) session-to-session transfer, (5) directionally modulated MEG, (6) finger movements recorded by ECoG. As with past competitions, our hope is that winning entries may enhance the analysis methods of future BCIs.</style></abstract><issue><style face="normal" font="default" size="100%">55</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Zuoguan Wang</style></author><author><style face="normal" font="default" size="100%">Ji, Q</style></author><author><style face="normal" font="default" size="100%">Miller, John W</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Prior knowledge improves decoding of finger flexion from electrocorticographic signals.</style></title><secondary-title><style face="normal" font="default" size="100%">Front Neurosci</style></secondary-title><alt-title><style face="normal" font="default" size="100%">Front Neurosci</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">brain-computer interface</style></keyword><keyword><style  face="normal" font="default" size="100%">decoding algorithm</style></keyword><keyword><style  face="normal" font="default" size="100%">electrocorticographic</style></keyword><keyword><style  face="normal" font="default" size="100%">finger flexion</style></keyword><keyword><style  face="normal" font="default" size="100%">machine learning</style></keyword><keyword><style  face="normal" font="default" size="100%">prior knowledge</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2011</style></year><pub-dates><date><style  face="normal" font="default"
size="100%">11/2011</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/22144944</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">5</style></volume><pages><style face="normal" font="default" size="100%">127</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;&lt;span style=&quot;font-family: arial, helvetica, clean, sans-serif; font-size: 13px; line-height: 17px;&quot;&gt;Brain-computer interfaces (BCIs) use brain signals to convey a user's intent. Some BCI approaches begin by decoding kinematic parameters of movements from brain signals, and then proceed to using these signals, in absence of movements, to allow a user to control an output. Recent results have shown that electrocorticographic (ECoG) recordings from the surface of the brain in humans can give information about kinematic parameters (e.g., hand velocity or finger flexion). The decoding approaches in these studies usually employed classical classification/regression algorithms that derive a linear mapping between brain signals and outputs. However, they typically only incorporate little prior information about the target movement parameter. In this paper, we incorporate prior knowledge using a Bayesian decoding method, and use it to decode finger flexion from ECoG signals. Specifically, we exploit the constraints that govern finger flexion and incorporate these constraints in the construction, structure, and the probabilistic functions of the prior model of a switched non-parametric dynamic system (SNDS). Given a measurement model resulting from a traditional linear regression method, we decoded finger flexion using posterior estimation that combined the prior and measurement models. 
Our results show that the application of the Bayesian decoding model, which incorporates prior knowledge, improves decoding performance compared to the application of a linear regression model, which does not incorporate prior knowledge. Thus, the results presented in this paper may ultimately lead to neurally controlled hand prostheses with full fine-grained finger articulation.&lt;/span&gt;&lt;/p&gt;</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Peter Brunner</style></author><author><style face="normal" font="default" size="100%">A L Ritaccio</style></author><author><style face="normal" font="default" size="100%">Emrich, Joseph F</style></author><author><style face="normal" font="default" size="100%">H Bischof</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Rapid Communication with a &quot;P300&quot; Matrix Speller Using Electrocorticographic Signals (ECoG).</style></title><secondary-title><style face="normal" font="default" size="100%">Front Neurosci</style></secondary-title><alt-title><style face="normal" font="default" size="100%">Front Neurosci</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">brain-computer interface</style></keyword><keyword><style  face="normal" font="default" size="100%">Electrocorticography</style></keyword><keyword><style  face="normal" font="default" size="100%">event-related potential</style></keyword><keyword><style  face="normal" font="default" size="100%">P300</style></keyword><keyword><style  face="normal" font="default" size="100%">speller</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2011</style></year><pub-dates><date><style  face="normal" 
font="default" size="100%">02/2011</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/21369351</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">5</style></volume><pages><style face="normal" font="default" size="100%">5</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">A brain-computer interface (BCI) can provide a non-muscular communication channel to severely disabled people. One particular realization of a BCI is the P300 matrix speller that was originally described by Farwell and Donchin (1988). This speller uses event-related potentials (ERPs) that include the P300 ERP. All previous online studies of the P300 matrix speller used scalp-recorded electroencephalography (EEG) and were limited in their communication performance to only a few characters per minute. In our study, we investigated the feasibility of using electrocorticographic (ECoG) signals for online operation of the matrix speller, and determined associated spelling rates. We used the matrix speller that is implemented in the BCI2000 system. This speller used ECoG signals that were recorded from frontal, parietal, and occipital areas in one subject. This subject spelled a total of 444 characters in online experiments. The results showed that the subject sustained a rate of 17 characters/min (i.e., 69 bits/min), and achieved a peak rate of 22 characters/min (i.e., 113 bits/min). Detailed analysis of the results suggests that ERPs over visual areas (i.e., visual evoked potentials) contribute significantly to the performance of the matrix speller BCI system. Our results also point to potential reasons for the apparent advantages in spelling performance of ECoG compared to EEG. Thus, with additional verification in more subjects, these results may further extend the communication options for people with serious neuromuscular disabilities.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Peter Brunner</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Brain-Computer Interaction.</style></title><secondary-title><style face="normal" font="default" size="100%">5th Intl. Conference on Augmented Cognition</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">BCI</style></keyword><keyword><style  face="normal" font="default" size="100%">brain-computer interface</style></keyword><keyword><style  face="normal" font="default" size="100%">neural engineering</style></keyword><keyword><style  face="normal" font="default" size="100%">neural prosthesis</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2009</style></year><pub-dates><date><style  face="normal" font="default" size="100%">2009</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://link.springer.com/chapter/10.1007%2F978-3-642-02812-0_81</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">Springer</style></publisher><isbn><style face="normal" font="default" size="100%">978-3-642-02811-3</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Detection and automated interpretation of attention-related or intention-related brain activity carries significant promise for many military and civilian applications. This interpretation of brain activity could provide information about a person’s intended movements, imagined movements, or attentional focus, and thus could be valuable for optimizing or replacing traditional motor-based communication between a person and a computer or other output devices. We describe here the objective and preliminary results of our studies in this area.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Krusienski, Dean J</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author><author><style face="normal" font="default" size="100%">Dennis J. McFarland</style></author><author><style face="normal" font="default" size="100%">Jonathan Wolpaw</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Tracking of the mu rhythm using an empirically derived matched filter.</style></title><secondary-title><style face="normal" font="default" size="100%">Proc.
IEEE International Conference of Neural Engineering</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">bioelectric potentials</style></keyword><keyword><style  face="normal" font="default" size="100%">Brain Computer Interfaces</style></keyword><keyword><style  face="normal" font="default" size="100%">brain modeling</style></keyword><keyword><style  face="normal" font="default" size="100%">brain-computer interface</style></keyword><keyword><style  face="normal" font="default" size="100%">communication device</style></keyword><keyword><style  face="normal" font="default" size="100%">communication system control</style></keyword><keyword><style  face="normal" font="default" size="100%">cortical mu rhythm modulation</style></keyword><keyword><style  face="normal" font="default" size="100%">EEG</style></keyword><keyword><style  face="normal" font="default" size="100%">Electroencephalography</style></keyword><keyword><style  face="normal" font="default" size="100%">empirically derived matched filter</style></keyword><keyword><style  face="normal" font="default" size="100%">handicapped aids</style></keyword><keyword><style  face="normal" font="default" size="100%">laboratories</style></keyword><keyword><style  face="normal" font="default" size="100%">matched filters</style></keyword><keyword><style  face="normal" font="default" size="100%">medical signal detection</style></keyword><keyword><style  face="normal" font="default" size="100%">medical signal processing</style></keyword><keyword><style  face="normal" font="default" size="100%">monitoring</style></keyword><keyword><style  face="normal" font="default" size="100%">motor imagery</style></keyword><keyword><style  face="normal" font="default" size="100%">mu rhythm tracking</style></keyword><keyword><style  face="normal" font="default" size="100%">noninvasive treatment</style></keyword><keyword><style  face="normal" font="default" 
size="100%">rhythm</style></keyword><keyword><style  face="normal" font="default" size="100%">synchronous motors</style></keyword><keyword><style  face="normal" font="default" size="100%">two-dimensional cursor control data</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2005</style></year><pub-dates><date><style  face="normal" font="default" size="100%">03/2005</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=1419559</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">IEEE</style></publisher><pub-location><style face="normal" font="default" size="100%">Arlington, VA</style></pub-location><isbn><style face="normal" font="default" size="100%">0-7803-8710-4</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Benjamin Blankertz</style></author><author><style face="normal" font="default" size="100%">Müller, Klaus-Robert</style></author><author><style face="normal" font="default" size="100%">Curio, Gabriel</style></author><author><style face="normal" font="default" size="100%">Theresa M Vaughan</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author><author><style face="normal" font="default" size="100%">Jonathan Wolpaw</style></author><author><style face="normal" font="default" size="100%">Schlögl, Alois</style></author><author><style face="normal" font="default" size="100%">Neuper, Christa</style></author><author><style face="normal" font="default" size="100%">Pfurtscheller, Gert</style></author><author><style face="normal" font="default" size="100%">Hinterberger, Thilo</style></author><author><style 
face="normal" font="default" size="100%">Schröder, Michael</style></author><author><style face="normal" font="default" size="100%">Niels Birbaumer</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">The BCI Competition 2003: progress and perspectives in detection and discrimination of EEG single trials.</style></title><secondary-title><style face="normal" font="default" size="100%">IEEE transactions on bio-medical engineering</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">augmentative communication</style></keyword><keyword><style  face="normal" font="default" size="100%">BCI</style></keyword><keyword><style  face="normal" font="default" size="100%">beta-rhythm</style></keyword><keyword><style  face="normal" font="default" size="100%">brain-computer interface</style></keyword><keyword><style  face="normal" font="default" size="100%">EEG</style></keyword><keyword><style  face="normal" font="default" size="100%">ERP</style></keyword><keyword><style  face="normal" font="default" size="100%">imagined hand movements</style></keyword><keyword><style  face="normal" font="default" size="100%">lateralized readiness potential</style></keyword><keyword><style  face="normal" font="default" size="100%">mu-rhythm</style></keyword><keyword><style  face="normal" font="default" size="100%">P300</style></keyword><keyword><style  face="normal" font="default" size="100%">Rehabilitation</style></keyword><keyword><style  face="normal" font="default" size="100%">single-trial classification</style></keyword><keyword><style  face="normal" font="default" size="100%">slow cortical potentials</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2004</style></year><pub-dates><date><style  face="normal" font="default" size="100%">06/2004</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" 
size="100%">http://www.ncbi.nlm.nih.gov/pubmed/15188876</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">51</style></volume><pages><style face="normal" font="default" size="100%">1044–1051</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Interest in developing a new method of man-to-machine communication, a brain-computer interface (BCI), has grown steadily over the past few decades. BCIs create a new communication channel between the brain and an output device by bypassing conventional motor output pathways of nerves and muscles. These systems use signals recorded from the scalp, the surface of the cortex, or from inside the brain to enable users to control a variety of applications including simple word-processing software and orthotics. BCI technology could therefore provide a new communication and control option for individuals who cannot otherwise express their wishes to the outside world. Signal processing and classification methods are essential tools in the development of improved BCI technology. We organized the BCI Competition 2003 to evaluate the current state of the art of these tools. Four laboratories well versed in EEG-based BCI research provided six data sets in a documented format. We made these data sets (i.e., labeled training sets and unlabeled test sets) and their descriptions available on the Internet. The goal in the competition was to maximize the performance measure for the test labels. Researchers worldwide tested their algorithms and competed for the best classification results. This paper describes the six data sets and the results and function of the most successful algorithms.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author><author><style face="normal" font="default" size="100%">Jonathan Wolpaw</style></author><author><style face="normal" font="default" size="100%">Dennis J. McFarland</style></author><author><style face="normal" font="default" size="100%">Pfurtscheller, G.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">EEG-based communication: presence of an error potential.</style></title><secondary-title><style face="normal" font="default" size="100%">Clinical neurophysiology : official journal of the International Federation of Clinical Neurophysiology</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">augmentative communication</style></keyword><keyword><style  face="normal" font="default" size="100%">brain-computer interface</style></keyword><keyword><style  face="normal" font="default" size="100%">Electroencephalography</style></keyword><keyword><style  face="normal" font="default" size="100%">error potential</style></keyword><keyword><style  face="normal" font="default" size="100%">error related negativity</style></keyword><keyword><style  face="normal" font="default" size="100%">event related potential</style></keyword><keyword><style  face="normal" font="default" size="100%">mu rhythm</style></keyword><keyword><style  face="normal" font="default" size="100%">Rehabilitation</style></keyword><keyword><style  face="normal" font="default" size="100%">sensorimotor cortex</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2000</style></year><pub-dates><date><style  face="normal" font="default" 
size="100%">12/2000</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/11090763</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">111</style></volume><pages><style face="normal" font="default" size="100%">2138–2144</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">EEG-based communication could be a valuable new augmentative communication technology for those with severe motor disabilities. Like all communication methods, it faces the problem of errors in transmission. In the Wadsworth EEG-based brain-computer interface (BCI) system, subjects learn to use mu or beta rhythm amplitude to move a cursor to targets on a computer screen. While cursor movement is highly accurate in trained subjects, it is not perfect.</style></abstract></record></records></xml>