<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Herff, C.</style></author><author><style face="normal" font="default" size="100%">Heger, D.</style></author><author><style face="normal" font="default" size="100%">Pesters, Adriana de</style></author><author><style face="normal" font="default" size="100%">Telaar, D.</style></author><author><style face="normal" font="default" size="100%">Peter Brunner</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author><author><style face="normal" font="default" size="100%">Schultz, T.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Brain-to-text: Decoding spoken sentences from phone representations in the brain.</style></title><secondary-title><style face="normal" font="default" size="100%">Frontiers in Neuroscience</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">automatic speech recognition</style></keyword><keyword><style  face="normal" font="default" size="100%">brain-computer interface</style></keyword><keyword><style  face="normal" font="default" size="100%">broadband gamma</style></keyword><keyword><style  face="normal" font="default" size="100%">ECoG</style></keyword><keyword><style  face="normal" font="default" size="100%">Electrocorticography</style></keyword><keyword><style  face="normal" font="default" size="100%">pattern recognition</style></keyword><keyword><style  face="normal" font="default" size="100%">speech decoding</style></keyword><keyword><style  face="normal" font="default" size="100%">speech production</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2015</style></year><pub-dates><date><style  face="normal" font="default"
size="100%">06/2015</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://journal.frontiersin.org/article/10.3389/fnins.2015.00217/abstract</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">It has long been speculated whether communication between humans and machines based on natural speech-related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones.
In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Kubanek, Jan</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">NeuralAct: A Tool to Visualize Electrocortical (ECoG) Activity on a Three-Dimensional Model of the Cortex.</style></title><secondary-title><style face="normal" font="default" size="100%">Neuroinformatics</style></secondary-title><alt-title><style face="normal" font="default" size="100%">Neuroinformatics</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Brain</style></keyword><keyword><style  face="normal" font="default" size="100%">DOT</style></keyword><keyword><style  face="normal" font="default" size="100%">ECoG</style></keyword><keyword><style  face="normal" font="default" size="100%">EEG</style></keyword><keyword><style  face="normal" font="default" size="100%">imaging</style></keyword><keyword><style  face="normal" font="default" size="100%">Matlab</style></keyword><keyword><style  face="normal" font="default" size="100%">MEG</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2015</style></year><pub-dates><date><style  face="normal" font="default" size="100%">04/2015</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/25381641</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">13</style></volume><pages><style face="normal" font="default" size="100%">167-74</style></pages><language><style face="normal"
font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Electrocorticography (ECoG) records neural signals directly from the surface of the cortex. Due to its high temporal and favorable spatial resolution, ECoG has emerged as a valuable new tool in acquiring cortical activity in cognitive and systems neuroscience. Many studies using ECoG visualized topographies of cortical activity or statistical tests on a three-dimensional model of the cortex, but a dedicated tool for this function has not yet been described. In this paper, we describe the NeuralAct package that serves this purpose. This package takes as input the 3D coordinates of the recording sensors, a cortical model in the same coordinate system (e.g., Talairach), and the activation data to be visualized at each sensor. It then aligns the sensor coordinates with the cortical model, convolves the activation data with a spatial kernel, and renders the resulting activations in color on the cortical model. The NeuralAct package can plot cortical activations of an individual subject as well as activations averaged over subjects. It is capable of rendering single images as well as sequences of images. The software runs under Matlab and is stable and robust. Here we provide the tool and describe its visualization capabilities and procedures.
The provided package contains thoroughly documented code and includes a simple demo that guides the researcher through the functionality of the tool.</style></abstract><issue><style face="normal" font="default" size="100%">2</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Stephen, Emily P</style></author><author><style face="normal" font="default" size="100%">Lepage, Kyle Q</style></author><author><style face="normal" font="default" size="100%">Eden, Uri T</style></author><author><style face="normal" font="default" size="100%">Peter Brunner</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author><author><style face="normal" font="default" size="100%">Jonathan S Brumberg</style></author><author><style face="normal" font="default" size="100%">Guenther, Frank H</style></author><author><style face="normal" font="default" size="100%">Kramer, Mark A</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Assessing dynamics, spatial scale, and uncertainty in task-related brain network analyses.</style></title><secondary-title><style face="normal" font="default" size="100%">Frontiers in Computational Neuroscience</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">canonical correlation</style></keyword><keyword><style  face="normal" font="default" size="100%">coherence</style></keyword><keyword><style  face="normal" font="default" size="100%">ECoG</style></keyword><keyword><style  face="normal" font="default" size="100%">EEG</style></keyword><keyword><style  face="normal" font="default" size="100%">functional connectivity</style></keyword><keyword><style  face="normal" font="default" size="100%">MEG</style></keyword></keywords><dates><year><style  face="normal"
font="default" size="100%">2014</style></year><pub-dates><date><style  face="normal" font="default" size="100%">03/2014</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/24678295</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">8</style></volume><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">The brain is a complex network of interconnected elements, whose interactions evolve dynamically in time to cooperatively perform specific functions. A common technique to probe these interactions involves multi-sensor recordings of brain activity during a repeated task. Many techniques exist to characterize the resulting task-related activity, including establishing functional networks, which represent the statistical associations between brain areas. Although functional network inference is commonly employed to analyze neural time series data, techniques to assess the uncertainty—both in the functional network edges and the corresponding aggregate measures of network topology—are lacking. To address this, we describe a statistically principled approach for computing uncertainty in functional networks and aggregate network measures in task-related data. The approach is based on a resampling procedure that utilizes the trial structure common in experimental recordings. We show in simulations that this approach successfully identifies functional networks and associated measures of confidence emergent during a task in a variety of scenarios, including dynamically evolving networks. In addition, we describe a principled technique for establishing functional networks based on predetermined regions of interest using canonical correlation. Doing so provides additional robustness to the functional network inference. 
Finally, we illustrate the use of these methods on example invasive brain voltage recordings collected during an overt speech task. The general strategy described here—appropriate for static and dynamic network inference and different statistical measures of coupling—permits the evaluation of confidence in network measures in a variety of settings common to neuroscience.</style></abstract><issue><style face="normal" font="default" size="100%">31</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Jonathan Wolpaw</style></author><author><style face="normal" font="default" size="100%">E. Winter-Wolpaw</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">BCIs That Use Electrocorticographic Activity.</style></title><secondary-title><style face="normal" font="default" size="100%">Brain-Computer Interfaces: Principles and Practice</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">brain signals</style></keyword><keyword><style  face="normal" font="default" size="100%">brain-computer interfaces</style></keyword><keyword><style  face="normal" font="default" size="100%">ECoG</style></keyword><keyword><style  face="normal" font="default" size="100%">intracortically recorded signals</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2012</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780195388855.001.0001/acprof-9780195388855-chapter-015</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">Oxford University Press</style></publisher><language><style face="normal" 
font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">This chapter discusses the potential of electrocorticography (ECoG) as a clinically useful brain-computer interface signal modality. ECoG has greater amplitude, higher topographical resolution, and a much broader frequency range than scalp-recorded electroencephalography and is less susceptible to artifacts. With current and foreseeable recording methodologies, ECoG is likely to have greater long-term stability than intracortically recorded signals. Furthermore, it can more readily be recorded from larger cortical areas, and it requires much lower digitization rates, thus greatly reducing the power requirements of wholly implanted systems.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Wang, Z.</style></author><author><style face="normal" font="default" size="100%">Gunduz, Aysegul</style></author><author><style face="normal" font="default" size="100%">Peter Brunner</style></author><author><style face="normal" font="default" size="100%">A L Ritaccio</style></author><author><style face="normal" font="default" size="100%">Ji, Q</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Decoding Onset and Direction of Movements using Electrocorticographic (ECoG) Signals in Humans.</style></title><secondary-title><style face="normal" font="default" size="100%">Frontiers in Neuroengineering</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">brain computer interface</style></keyword><keyword><style  face="normal" font="default" size="100%">ECoG</style></keyword><keyword><style  face="normal" font="default" size="100%">movement 
direction prediction</style></keyword><keyword><style  face="normal" font="default" size="100%">movement onset prediction</style></keyword><keyword><style  face="normal" font="default" size="100%">neurorehabilitation</style></keyword><keyword><style  face="normal" font="default" size="100%">performance augmentation</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2012</style></year><pub-dates><date><style  face="normal" font="default" size="100%">08/2012</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/22891058</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">5</style></volume><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Communication of intent usually requires motor function. This requirement can be limiting when a person is engaged in a task, or prohibitive for some people suffering from neuromuscular disorders. Determining a person's intent, e.g., where and when to move, from brain signals rather than from muscles would have important applications in clinical or other domains. For example, detection of the onset and direction of intended movements may provide the basis for restoration of simple grasping function in people with chronic stroke, or could be used to optimize a user's interaction with the surrounding environment. Detecting the onset and direction of actual movements is a first step in this direction. In this study, we demonstrate that we can detect the onset of intended movements and their direction using electrocorticographic (ECoG) signals recorded from the surface of the cortex in humans. We also demonstrate in a simulation that the information encoded in ECoG about these movements may improve performance in a targeting task.
In summary, the results in this paper suggest that detection of intended movement is possible, and may serve useful functions.</style></abstract><issue><style face="normal" font="default" size="100%">15</style></issue></record></records></xml>