<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Sam V. Norman-Haignere</style></author><author><style face="normal" font="default" size="100%">Jenelle Feather</style></author><author><style face="normal" font="default" size="100%">Dana Boebinger</style></author><author><style face="normal" font="default" size="100%">Peter Brunner</style></author><author><style face="normal" font="default" size="100%">Anthony Ritaccio</style></author><author><style face="normal" font="default" size="100%">Josh H. McDermott</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author><author><style face="normal" font="default" size="100%">Nancy Kanwisher</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">A neural population selective for song in human auditory cortex</style></title><secondary-title><style face="normal" font="default" size="100%">Current Biology</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Auditory Cortex</style></keyword><keyword><style  face="normal" font="default" size="100%">component</style></keyword><keyword><style  face="normal" font="default" size="100%">ECoG</style></keyword><keyword><style  face="normal" font="default" size="100%">Electrocorticography</style></keyword><keyword><style  face="normal" font="default" size="100%">fMRI</style></keyword><keyword><style  face="normal" font="default" size="100%">music</style></keyword><keyword><style  face="normal" font="default" size="100%">natural sounds</style></keyword><keyword><style  face="normal" font="default" size="100%">song</style></keyword><keyword><style  face="normal" font="default" size="100%">Speech</style></keyword><keyword><style  face="normal" font="default" 
size="100%">voice</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2022</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://www.sciencedirect.com/science/article/pii/S0960982222001312</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">32</style></volume><pages><style face="normal" font="default" size="100%">1470-1484.e12</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">How is music represented in the brain? While neuroimaging has revealed some spatial segregation between responses to music versus other sounds, little is known about the neural code for music itself. To address this question, we developed a method to infer canonical response components of human auditory cortex using intracranial responses to natural sounds, and further used the superior coverage of fMRI to map their spatial distribution. The inferred components replicated many prior findings, including distinct neural selectivity for speech and music, but also revealed a novel component that responded nearly exclusively to music with singing. Song selectivity was not explainable by standard acoustic features, was located near speech- and music-selective responses, and was also evident in individual electrodes. These results suggest that representations of music are fractionated into subpopulations selective for different types of music, one of which is specialized for the analysis of song.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">J.R. Swift</style></author><author><style face="normal" font="default" size="100%">W.G. Coon</style></author><author><style face="normal" font="default" size="100%">C. 
Guger</style></author><author><style face="normal" font="default" size="100%">Peter Brunner</style></author><author><style face="normal" font="default" size="100%">M. Bunch</style></author><author><style face="normal" font="default" size="100%">T. Lynch</style></author><author><style face="normal" font="default" size="100%">B. Frawley</style></author><author><style face="normal" font="default" size="100%">A.L. Ritaccio</style></author><author><style face="normal" font="default" size="100%">G. Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Passive functional mapping of receptive language areas using electrocorticographic signals</style></title><secondary-title><style face="normal" font="default" size="100%">Clinical Neurophysiology</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">ECoG</style></keyword><keyword><style  face="normal" font="default" size="100%">Electrocorticography</style></keyword><keyword><style  face="normal" font="default" size="100%">functional mapping</style></keyword><keyword><style  face="normal" font="default" size="100%">Intracranial</style></keyword><keyword><style  face="normal" font="default" size="100%">Receptive language</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2018</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.sciencedirect.com/science/article/pii/S1388245718312288</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">129</style></volume><pages><style face="normal" font="default" size="100%">2517-2524</style></pages><language><style face="normal" font="default" size="100%">eng</style></language></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Gunduz, 
Aysegul</style></author><author><style face="normal" font="default" size="100%">Peter Brunner</style></author><author><style face="normal" font="default" size="100%">Sharma, Mohit</style></author><author><style face="normal" font="default" size="100%">Leuthardt, Eric C.</style></author><author><style face="normal" font="default" size="100%">Ritaccio, Anthony L.</style></author><author><style face="normal" font="default" size="100%">Pesaran, Bijan</style></author><author><style face="normal" font="default" size="100%">Schalk, Gerwin</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Differential roles of high gamma and local motor potentials for movement preparation and execution</style></title><secondary-title><style face="normal" font="default" size="100%">Brain-Computer Interfaces</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">BCI</style></keyword><keyword><style  face="normal" font="default" size="100%">brain-computer interfaces</style></keyword><keyword><style  face="normal" font="default" size="100%">ECoG</style></keyword><keyword><style  face="normal" font="default" size="100%">Electrocorticography</style></keyword><keyword><style  face="normal" font="default" size="100%">sensorimotor systems</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2016</style></year><pub-dates><date><style  face="normal" font="default" size="100%">May</style></date></pub-dates></dates><volume><style face="normal" font="default" size="100%">3</style></volume><pages><style face="normal" font="default" size="100%">88-102</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Determining a person’s intent, such as the planned direction of their movement, directly from their cortical activity could support important applications such as 
brain-computer interfaces (BCIs). Continuing development of improved BCI systems requires a better understanding of how the brain prepares for and executes movements. To contribute to this understanding, we recorded surface cortical potentials (electrocorticographic signals; ECoG) in 11 human subjects performing a delayed center-out task to establish the differential role of high gamma activity (HGA) and the local motor potential (LMP) as a function of time and anatomical area during movement preparation and execution. High gamma modulations mostly confirm previous findings of sensorimotor cortex involvement, whereas modulations in LMPs are observed in prefrontal cortices. These modulations include directional information during movement planning as well as execution. Our results suggest that sampling signals from these widely distributed cortical areas improves decoding accuracy.</style></abstract><issue><style face="normal" font="default" size="100%">2</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Herff, C.</style></author><author><style face="normal" font="default" size="100%">Heger, D.</style></author><author><style face="normal" font="default" size="100%">Pesters, Adriana de</style></author><author><style face="normal" font="default" size="100%">Telaar, D.</style></author><author><style face="normal" font="default" size="100%">Peter Brunner</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author><author><style face="normal" font="default" size="100%">Schultz, T.</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Brain-to-text: Decoding spoken phrases from phone representations in the brain.</style></title><secondary-title><style face="normal" font="default" size="100%">Frontiers in Neuroscience</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">automatic speech recognition</style></keyword><keyword><style  face="normal" font="default" size="100%">brain-computer interface</style></keyword><keyword><style  face="normal" font="default" size="100%">broadband gamma</style></keyword><keyword><style  face="normal" font="default" size="100%">ECoG</style></keyword><keyword><style  face="normal" font="default" size="100%">Electrocorticography</style></keyword><keyword><style  face="normal" font="default" size="100%">pattern recognition</style></keyword><keyword><style  face="normal" font="default" size="100%">speech decoding</style></keyword><keyword><style  face="normal" font="default" size="100%">speech production</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2015</style></year><pub-dates><date><style  face="normal" font="default" size="100%">06/2015</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://journal.frontiersin.org/article/10.3389/fnins.2015.00217/abstract</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">It has long been speculated whether communication between humans and machines based on natural speech related cortical activity is possible. Over the past decade, studies have suggested that it is feasible to recognize isolated aspects of speech from neural signals, such as auditory features, phones or one of a few isolated words. However, until now it remained an unsolved challenge to decode continuously spoken speech from the neural substrate associated with speech and language processing. 
Here, we show for the first time that continuously spoken speech can be decoded into the expressed words from intracranial electrocorticographic (ECoG) recordings. Specifically, we implemented a system, which we call Brain-To-Text, that models single phones, employs techniques from automatic speech recognition (ASR), and thereby transforms brain activity while speaking into the corresponding textual representation. Our results demonstrate that our system can achieve word error rates as low as 25% and phone error rates below 50%. Additionally, our approach contributes to the current understanding of the neural basis of continuous speech production by identifying those cortical regions that hold substantial information about individual phones. In conclusion, the Brain-To-Text system described in this paper represents an important step toward human-machine communication based on imagined speech.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Stephen, Emily P</style></author><author><style face="normal" font="default" size="100%">Lepage, Kyle Q</style></author><author><style face="normal" font="default" size="100%">Eden, Uri T</style></author><author><style face="normal" font="default" size="100%">Peter Brunner</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author><author><style face="normal" font="default" size="100%">Jonathan S Brumberg</style></author><author><style face="normal" font="default" size="100%">Guenther, Frank H</style></author><author><style face="normal" font="default" size="100%">Kramer, Mark A</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Assessing dynamics, spatial scale, and uncertainty in task-related brain network analyses.</style></title><secondary-title><style face="normal" font="default" 
size="100%">Frontiers in Computational Neuroscience</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">canonical correlation</style></keyword><keyword><style  face="normal" font="default" size="100%">coherence</style></keyword><keyword><style  face="normal" font="default" size="100%">ECoG</style></keyword><keyword><style  face="normal" font="default" size="100%">EEG</style></keyword><keyword><style  face="normal" font="default" size="100%">functional connectivity</style></keyword><keyword><style  face="normal" font="default" size="100%">MEG</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2014</style></year><pub-dates><date><style  face="normal" font="default" size="100%">03/2014</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/24678295</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">8</style></volume><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">The brain is a complex network of interconnected elements, whose interactions evolve dynamically in time to cooperatively perform specific functions. A common technique to probe these interactions involves multi-sensor recordings of brain activity during a repeated task. Many techniques exist to characterize the resulting task-related activity, including establishing functional networks, which represent the statistical associations between brain areas. Although functional network inference is commonly employed to analyze neural time series data, techniques to assess the uncertainty—both in the functional network edges and the corresponding aggregate measures of network topology—are lacking. 
To address this, we describe a statistically principled approach for computing uncertainty in functional networks and aggregate network measures in task-related data. The approach is based on a resampling procedure that utilizes the trial structure common in experimental recordings. We show in simulations that this approach successfully identifies functional networks and associated measures of confidence emergent during a task in a variety of scenarios, including dynamically evolving networks. In addition, we describe a principled technique for establishing functional networks based on predetermined regions of interest using canonical correlation. Doing so provides additional robustness to the functional network inference. Finally, we illustrate the use of these methods on example invasive brain voltage recordings collected during an overt speech task. The general strategy described here—appropriate for static and dynamic network inference and different statistical measures of coupling—permits the evaluation of confidence in network measures in a variety of settings common to neuroscience.</style></abstract><issue><style face="normal" font="default" size="100%">31</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Wang, Z.</style></author><author><style face="normal" font="default" size="100%">Gunduz, Aysegul</style></author><author><style face="normal" font="default" size="100%">Peter Brunner</style></author><author><style face="normal" font="default" size="100%">A L Ritaccio</style></author><author><style face="normal" font="default" size="100%">Ji, Q</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Decoding Onset and Direction of Movements using Electrocorticographic (ECoG) 
Signals in Humans.</style></title><secondary-title><style face="normal" font="default" size="100%">Frontiers in Neuroengineering</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">brain computer interface</style></keyword><keyword><style  face="normal" font="default" size="100%">ECoG</style></keyword><keyword><style  face="normal" font="default" size="100%">movement direction prediction</style></keyword><keyword><style  face="normal" font="default" size="100%">movement onset prediction</style></keyword><keyword><style  face="normal" font="default" size="100%">neurorehabilitation</style></keyword><keyword><style  face="normal" font="default" size="100%">performance augmentation</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2012</style></year><pub-dates><date><style  face="normal" font="default" size="100%">08/2012</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/22891058</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">5</style></volume><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Communication of intent usually requires motor function. This requirement can be limiting when a person is engaged in a task, or prohibitive for some people suffering from neuromuscular disorders. Determining a person's intent, e.g., where and when to move, from brain signals rather than from muscles would have important applications in clinical or other domains. For example, detection of the onset and direction of intended movements may provide the basis for restoration of simple grasping function in people with chronic stroke, or could be used to optimize a user's interaction with the surrounding environment. 
Detecting the onset and direction of actual movements is a first step in this direction. In this study, we demonstrate that we can detect the onset of intended movements and their direction using electrocorticographic (ECoG) signals recorded from the surface of the cortex in humans. We also demonstrate in a simulation that the information encoded in ECoG about these movements may improve performance in a targeting task. In summary, the results in this paper suggest that detection of intended movement is possible, and may serve useful functions.</style></abstract><issue><style face="normal" font="default" size="100%">15</style></issue></record></records></xml>