<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Sam V. Norman-Haignere</style></author><author><style face="normal" font="default" size="100%">Jenelle Feather</style></author><author><style face="normal" font="default" size="100%">Dana Boebinger</style></author><author><style face="normal" font="default" size="100%">Peter Brunner</style></author><author><style face="normal" font="default" size="100%">Anthony Ritaccio</style></author><author><style face="normal" font="default" size="100%">Josh H. McDermott</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author><author><style face="normal" font="default" size="100%">Nancy Kanwisher</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">A neural population selective for song in human auditory cortex</style></title><secondary-title><style face="normal" font="default" size="100%">Current Biology</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Auditory Cortex</style></keyword><keyword><style  face="normal" font="default" size="100%">component</style></keyword><keyword><style  face="normal" font="default" size="100%">ECoG</style></keyword><keyword><style  face="normal" font="default" size="100%">Electrocorticography</style></keyword><keyword><style  face="normal" font="default" size="100%">fMRI</style></keyword><keyword><style  face="normal" font="default" size="100%">music</style></keyword><keyword><style  face="normal" font="default" size="100%">natural sounds</style></keyword><keyword><style  face="normal" font="default" size="100%">song</style></keyword><keyword><style  face="normal" font="default" size="100%">Speech</style></keyword><keyword><style  face="normal" font="default" 
size="100%">voice</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2022</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://www.sciencedirect.com/science/article/pii/S0960982222001312</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">32</style></volume><pages><style face="normal" font="default" size="100%">1470-1484.e12</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">How is music represented in the brain? While neuroimaging has revealed some spatial segregation between responses to music versus other sounds, little is known about the neural code for music itself. To address this question, we developed a method to infer canonical response components of human auditory cortex using intracranial responses to natural sounds, and further used the superior coverage of fMRI to map their spatial distribution. The inferred components replicated many prior findings, including distinct neural selectivity for speech and music, but also revealed a novel component that responded nearly exclusively to music with singing. Song selectivity was not explainable by standard acoustic features, was located near speech- and music-selective responses, and was also evident in individual electrodes. These results suggest that representations of music are fractionated into subpopulations selective for different types of music, one of which is specialized for the analysis of song.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">J.R. Swift</style></author><author><style face="normal" font="default" size="100%">W.G. Coon</style></author><author><style face="normal" font="default" size="100%">C.
Guger</style></author><author><style face="normal" font="default" size="100%">Peter Brunner</style></author><author><style face="normal" font="default" size="100%">M. Bunch</style></author><author><style face="normal" font="default" size="100%">T. Lynch</style></author><author><style face="normal" font="default" size="100%">B. Frawley</style></author><author><style face="normal" font="default" size="100%">A.L. Ritaccio</style></author><author><style face="normal" font="default" size="100%">G. Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Passive functional mapping of receptive language areas using electrocorticographic signals</style></title><secondary-title><style face="normal" font="default" size="100%">Clinical Neurophysiology</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">ECoG</style></keyword><keyword><style  face="normal" font="default" size="100%">Electrocorticography</style></keyword><keyword><style  face="normal" font="default" size="100%">functional mapping</style></keyword><keyword><style  face="normal" font="default" size="100%">Intracranial</style></keyword><keyword><style  face="normal" font="default" size="100%">Receptive language</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2018</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.sciencedirect.com/science/article/pii/S1388245718312288</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">129</style></volume><pages><style face="normal" font="default" size="100%">2517-2524</style></pages><language><style face="normal" font="default" size="100%">eng</style></language></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default"
size="100%">Kapeller, C</style></author><author><style face="normal" font="default" size="100%">Ogawa, H</style></author><author><style face="normal" font="default" size="100%">Schalk, G</style></author><author><style face="normal" font="default" size="100%">Kunii, N</style></author><author><style face="normal" font="default" size="100%">Coon, WG</style></author><author><style face="normal" font="default" size="100%">Scharinger, J</style></author><author><style face="normal" font="default" size="100%">Guger, C</style></author><author><style face="normal" font="default" size="100%">Kamada, K</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Real-time detection and discrimination of visual perception using electrocorticographic signals</style></title><secondary-title><style face="normal" font="default" size="100%">Journal of Neural Engineering</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">BCI</style></keyword><keyword><style  face="normal" font="default" size="100%">Brain–computer interface</style></keyword><keyword><style  face="normal" font="default" size="100%">ECoG</style></keyword><keyword><style  face="normal" font="default" size="100%">gamma</style></keyword><keyword><style  face="normal" font="default" size="100%">high gamma mapping</style></keyword><keyword><style  face="normal" font="default" size="100%">real-time</style></keyword><keyword><style  face="normal" font="default" size="100%">visual</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2018</style></year><pub-dates><date><style  face="normal" font="default" size="100%">02/2018</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://iopscience.iop.org/article/10.1088/1741-2552/aaa9f6/pdf</style></url></web-urls></urls><volume><style face="normal" font="default" 
size="100%">15</style></volume><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Several neuroimaging studies have demonstrated that the ventral temporal cortex contains specialized regions that process visual stimuli. This study investigated the spatial and temporal dynamics of electrocorticographic (ECoG) responses to different types and colors of visual stimulation that were presented to four human participants, and demonstrated a real-time decoder that detects and discriminates responses to untrained natural images. Approach. ECoG signals from the participants were recorded while they were shown colored and greyscale versions of seven types of visual stimuli (images of faces, objects, bodies, line drawings, digits, and kanji and hiragana characters), resulting in 14 classes for discrimination (experiment I). Additionally, a real-time system asynchronously classified ECoG responses to faces, kanji and black screens presented via a monitor (experiment II), or to natural scenes (i.e. the face of an experimenter, natural images of faces and kanji, and a mirror) (experiment III). Outcome measures in all experiments included the discrimination performance across types based on broadband γ activity. Main results. Experiment I demonstrated an offline classification accuracy of 72.9% when discriminating among the seven types (without color separation). Further discrimination of grey versus colored images reached an accuracy of 67.1%. Discriminating all colors and types (14 classes) yielded an accuracy of 52.1%. In experiments II and III, the real-time decoder correctly detected 73.7% of responses to face, kanji and black computer stimuli and 74.8% of responses to presented natural scenes. Significance. Seven different types and their color information (either grey or color) could be detected and discriminated using broadband γ activity.
Discrimination performance was maximized when spatial and temporal information were combined. The discrimination of stimulus color information provided the first ECoG-based evidence for color-related population-level cortical broadband γ responses in humans. Stimulus categories can be detected by their ECoG responses in real time within 500 ms of stimulus onset.</style></abstract><issue><style face="normal" font="default" size="100%">3</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Gunduz, Aysegul</style></author><author><style face="normal" font="default" size="100%">Peter Brunner</style></author><author><style face="normal" font="default" size="100%">Sharma, Mohit</style></author><author><style face="normal" font="default" size="100%">Leuthardt, Eric C.</style></author><author><style face="normal" font="default" size="100%">Ritaccio, Anthony L.</style></author><author><style face="normal" font="default" size="100%">Pesaran, Bijan</style></author><author><style face="normal" font="default" size="100%">Schalk, Gerwin</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Differential roles of high gamma and local motor potentials for movement preparation and execution</style></title><secondary-title><style face="normal" font="default" size="100%">Brain-Computer Interfaces</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">BCI</style></keyword><keyword><style  face="normal" font="default" size="100%">brain-computer interfaces</style></keyword><keyword><style  face="normal" font="default" size="100%">ECoG</style></keyword><keyword><style  face="normal" font="default" size="100%">Electrocorticography</style></keyword><keyword><style  face="normal" font="default" size="100%">sensorimotor
systems</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2016</style></year><pub-dates><date><style  face="normal" font="default" size="100%">May</style></date></pub-dates></dates><volume><style face="normal" font="default" size="100%">3</style></volume><pages><style face="normal" font="default" size="100%">88-102</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Determining a person’s intent, such as the planned direction of their movement, directly from their cortical activity could support important applications such as brain-computer interfaces (BCIs). Continuing development of improved BCI systems requires a better understanding of how the brain prepares for and executes movements. To contribute to this understanding, we recorded surface cortical potentials (electrocorticographic signals; ECoG) in 11 human subjects performing a delayed center-out task to establish the differential role of high gamma activity (HGA) and the local motor potential (LMP) as a function of time and anatomical area during movement preparation and execution. High gamma modulations mostly confirm previous findings of sensorimotor cortex involvement, whereas modulations in LMPs are observed in prefrontal cortices. These modulations include directional information during movement planning as well as execution. 
Our results suggest that sampling signals from these widely distributed cortical areas improves decoding accuracy.</style></abstract><issue><style face="normal" font="default" size="100%">2</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Herff, C.</style></author><author><style face="normal" font="default" size="100%">Heger, D.</style></author><author><style face="normal" font="default" size="100%">Pesters, Adriana de</style></author><author><style face="normal" font="default" size="100%">Telaar, D.</style></author><author><style face="normal" font="default" size="100%">Peter Brunner</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author><author><style face="normal" font="default" size="100%">Schultz, T.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Brain-to-text: Decoding spoken sentences from phone representations in the brain.</style></title><secondary-title><style face="normal" font="default" size="100%">Journal of Neural Engineering</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">automatic speech recognition</style></keyword><keyword><style  face="normal" font="default" size="100%">brain-computer interface</style></keyword><keyword><style  face="normal" font="default" size="100%">broadband gamma</style></keyword><keyword><style  face="normal" font="default" size="100%">ECoG</style></keyword><keyword><style  face="normal" font="default" size="100%">Electrocorticography</style></keyword><keyword><style  face="normal" font="default" size="100%">pattern recognition</style></keyword><keyword><style  face="normal" font="default" size="100%">speech decoding</style></keyword><keyword><style  face="normal" font="default" size="100%">speech 
production</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2015</style></year><pub-dates><date><style  face="normal" font="default" size="100%">06/2015</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://journal.frontiersin.org/article/10.3389/fnins.2015.00217/abstract</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">It has long been speculated whether communication between humans and machines based on natural speech related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones.
In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Kubanek, Jan</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">NeuralAct: A Tool to Visualize Electrocortical (ECoG) Activity on a Three-Dimensional Model of the Cortex.</style></title><secondary-title><style face="normal" font="default" size="100%">Neuroinformatics</style></secondary-title><alt-title><style face="normal" font="default" size="100%">Neuroinformatics</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Brain</style></keyword><keyword><style  face="normal" font="default" size="100%">DOT</style></keyword><keyword><style  face="normal" font="default" size="100%">ECoG</style></keyword><keyword><style  face="normal" font="default" size="100%">EEG</style></keyword><keyword><style  face="normal" font="default" size="100%">imaging</style></keyword><keyword><style  face="normal" font="default" size="100%">Matlab</style></keyword><keyword><style  face="normal" font="default" size="100%">MEG</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2015</style></year><pub-dates><date><style  face="normal" font="default" size="100%">04/2015</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/25381641</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">13</style></volume><pages><style face="normal" font="default" size="100%">167-174</style></pages><language><style face="normal"
font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Electrocorticography (ECoG) records neural signals directly from the surface of the cortex. Due to its high temporal and favorable spatial resolution, ECoG has emerged as a valuable new tool in acquiring cortical activity in cognitive and systems neuroscience. Many studies using ECoG visualized topographies of cortical activity or statistical tests on a three-dimensional model of the cortex, but a dedicated tool for this function has not yet been described. In this paper, we describe the NeuralAct package that serves this purpose. This package takes as input the 3D coordinates of the recording sensors, a cortical model in the same coordinate system (e.g., Talairach), and the activation data to be visualized at each sensor. It then aligns the sensor coordinates with the cortical model, convolves the activation data with a spatial kernel, and renders the resulting activations in color on the cortical model. The NeuralAct package can plot cortical activations of an individual subject as well as activations averaged over subjects. It is capable of rendering single images as well as sequences of images. The software runs under Matlab and is stable and robust. We here provide the tool and describe its visualization capabilities and procedures.
The provided package contains thoroughly documented code and includes a simple demo that guides the researcher through the functionality of the tool.</style></abstract><issue><style face="normal" font="default" size="100%">2</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Stephen, Emily P</style></author><author><style face="normal" font="default" size="100%">Lepage, Kyle Q</style></author><author><style face="normal" font="default" size="100%">Eden, Uri T</style></author><author><style face="normal" font="default" size="100%">Peter Brunner</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author><author><style face="normal" font="default" size="100%">Jonathan S Brumberg</style></author><author><style face="normal" font="default" size="100%">Guenther, Frank H</style></author><author><style face="normal" font="default" size="100%">Kramer, Mark A</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Assessing dynamics, spatial scale, and uncertainty in task-related brain network analyses.</style></title><secondary-title><style face="normal" font="default" size="100%">Frontiers in Computational Neuroscience</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">canonical correlation</style></keyword><keyword><style  face="normal" font="default" size="100%">coherence</style></keyword><keyword><style  face="normal" font="default" size="100%">ECoG</style></keyword><keyword><style  face="normal" font="default" size="100%">EEG</style></keyword><keyword><style  face="normal" font="default" size="100%">functional connectivity</style></keyword><keyword><style  face="normal" font="default" size="100%">MEG</style></keyword></keywords><dates><year><style  face="normal"
font="default" size="100%">2014</style></year><pub-dates><date><style  face="normal" font="default" size="100%">03/2014</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/24678295</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">8</style></volume><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">The brain is a complex network of interconnected elements, whose interactions evolve dynamically in time to cooperatively perform specific functions. A common technique to probe these interactions involves multi-sensor recordings of brain activity during a repeated task. Many techniques exist to characterize the resulting task-related activity, including establishing functional networks, which represent the statistical associations between brain areas. Although functional network inference is commonly employed to analyze neural time series data, techniques to assess the uncertainty—both in the functional network edges and the corresponding aggregate measures of network topology—are lacking. To address this, we describe a statistically principled approach for computing uncertainty in functional networks and aggregate network measures in task-related data. The approach is based on a resampling procedure that utilizes the trial structure common in experimental recordings. We show in simulations that this approach successfully identifies functional networks and associated measures of confidence emergent during a task in a variety of scenarios, including dynamically evolving networks. In addition, we describe a principled technique for establishing functional networks based on predetermined regions of interest using canonical correlation. Doing so provides additional robustness to the functional network inference. 
Finally, we illustrate the use of these methods on example invasive brain voltage recordings collected during an overt speech task. The general strategy described here—appropriate for static and dynamic network inference and different statistical measures of coupling—permits the evaluation of confidence in network measures in a variety of settings common to neuroscience.</style></abstract><issue><style face="normal" font="default" size="100%">31</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Jonathan Wolpaw</style></author><author><style face="normal" font="default" size="100%">E. Winter-Wolpaw</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">BCIs That Use Electrocorticographic Activity.</style></title><secondary-title><style face="normal" font="default" size="100%">Brain-Computer Interfaces: Principles and Practice</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">brain signals</style></keyword><keyword><style  face="normal" font="default" size="100%">brain-computer interfaces</style></keyword><keyword><style  face="normal" font="default" size="100%">ECoG</style></keyword><keyword><style  face="normal" font="default" size="100%">intracortically recorded signals</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2012</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780195388855.001.0001/acprof-9780195388855-chapter-015</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">Oxford University Press</style></publisher><language><style face="normal" 
font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">This chapter discusses the potential of electrocorticography (ECoG) as a clinically useful brain-computer interface signal modality. ECoG has greater amplitude, higher topographical resolution, and a much broader frequency range than scalp-recorded electroencephalography and is less susceptible to artifacts. With current and foreseeable recording methodologies, ECoG is likely to have greater long-term stability than intracortically recorded signals. Furthermore, it can more readily be recorded from larger cortical areas, and it requires much lower digitization rates, thus greatly reducing the power requirements of wholly implanted systems.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Wang, Z.</style></author><author><style face="normal" font="default" size="100%">Gunduz, Aysegul</style></author><author><style face="normal" font="default" size="100%">Peter Brunner</style></author><author><style face="normal" font="default" size="100%">A L Ritaccio</style></author><author><style face="normal" font="default" size="100%">Ji, Q</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Decoding Onset and Direction of Movements using Electrocorticographic (ECoG) Signals in Humans.</style></title><secondary-title><style face="normal" font="default" size="100%">Frontiers in Neuroengineering</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">brain computer interface</style></keyword><keyword><style  face="normal" font="default" size="100%">ECoG</style></keyword><keyword><style  face="normal" font="default" size="100%">movement 
direction prediction</style></keyword><keyword><style  face="normal" font="default" size="100%">movement onset prediction</style></keyword><keyword><style  face="normal" font="default" size="100%">neurorehabilitation</style></keyword><keyword><style  face="normal" font="default" size="100%">performance augmentation</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2012</style></year><pub-dates><date><style  face="normal" font="default" size="100%">08/2012</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/22891058</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">5</style></volume><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Communication of intent usually requires motor function. This requirement can be limiting when a person is engaged in a task, or prohibitive for some people suffering from neuromuscular disorders. Determining a person's intent, e.g., where and when to move, from brain signals rather than from muscles would have important applications in clinical or other domains. For example, detection of the onset and direction of intended movements may provide the basis for restoration of simple grasping function in people with chronic stroke, or could be used to optimize a user's interaction with the surrounding environment. Detecting the onset and direction of actual movements is a first step in this direction. In this study, we demonstrate that we can detect the onset of intended movements and their direction using electrocorticographic (ECoG) signals recorded from the surface of the cortex in humans. We also demonstrate in a simulation that the information encoded in ECoG about these movements may improve performance in a targeting task.
In summary, the results in this paper suggest that detection of intended movement is possible, and may serve useful functions.</style></abstract><issue><style face="normal" font="default" size="100%">15</style></issue></record></records></xml>