<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Lotte, Fabien</style></author><author><style face="normal" font="default" size="100%">Brumberg, Jonathan S</style></author><author><style face="normal" font="default" size="100%">Brunner, Peter</style></author><author><style face="normal" font="default" size="100%">Gunduz, Aysegul</style></author><author><style face="normal" font="default" size="100%">Ritaccio, A L</style></author><author><style face="normal" font="default" size="100%">Guan, Cuntai</style></author><author><style face="normal" font="default" size="100%">Schalk, Gerwin</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Electrocorticographic representations of segmental features in continuous speech.</style></title><secondary-title><style face="normal" font="default" size="100%">Front Hum Neurosci</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">electrocorticography (ECoG)</style></keyword><keyword><style  face="normal" font="default" size="100%">manner of articulation</style></keyword><keyword><style  face="normal" font="default" size="100%">place of articulation</style></keyword><keyword><style  face="normal" font="default" size="100%">speech processing</style></keyword><keyword><style  face="normal" font="default" size="100%">voicing</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2015</style></year><pub-dates><date><style  face="normal" font="default" size="100%">02/2015</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/25759647</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">9</style></volume><pages><style
face="normal" font="default" size="100%">97</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Acoustic speech output results from coordinated articulation of dozens of muscles, bones and cartilages of the vocal mechanism. While we commonly take the fluency and speed of our speech productions for granted, the neural mechanisms facilitating the requisite muscular control are not completely understood. Previous neuroimaging and electrophysiology studies of speech sensorimotor control have typically concentrated on speech sounds (i.e., phonemes, syllables and words) in isolation; sentence-length investigations have largely been used to inform coincident linguistic processing. In this study, we examined the neural representations of segmental features (place and manner of articulation, and voicing status) in the context of fluent, continuous speech production. We used recordings from the cortical surface [electrocorticography (ECoG)] to simultaneously evaluate the spatial topography and temporal dynamics of the neural correlates of speech articulation that may mediate the generation of hypothesized gestural or articulatory scores. We found that the representation of place of articulation involved broad networks of brain regions during all phases of speech production: preparation, execution and monitoring. In contrast, manner of articulation and voicing status were dominated by auditory cortical responses after speech had been initiated.
These results provide new insight into the articulatory and auditory processes underlying speech production in terms of their motor requirements and acoustic correlates.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Dijkstra, K.</style></author><author><style face="normal" font="default" size="100%">Brunner, Peter</style></author><author><style face="normal" font="default" size="100%">Gunduz, Aysegul</style></author><author><style face="normal" font="default" size="100%">Coon, W.G.</style></author><author><style face="normal" font="default" size="100%">Ritaccio, A L</style></author><author><style face="normal" font="default" size="100%">Farquhar, Jason</style></author><author><style face="normal" font="default" size="100%">Schalk, Gerwin</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Identifying the Attended Speaker Using Electrocorticographic (ECoG) Signals.</style></title><secondary-title><style face="normal" font="default" size="100%">Journal of Neural Engineering</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">auditory attention</style></keyword><keyword><style  face="normal" font="default" size="100%">Brain-computer interface (BCI)</style></keyword><keyword><style  face="normal" font="default" size="100%">Cocktail Party</style></keyword><keyword><style  face="normal" font="default" size="100%">electrocorticography (ECoG)</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2015</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4776341/</style></url></web-urls></urls><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal"
font="default" size="100%">People affected by severe neurodegenerative diseases (e.g., late-stage amyotrophic lateral sclerosis (ALS) or locked-in syndrome) eventually lose all muscular control. Thus, they cannot use traditional assistive communication devices that depend on muscle control, or brain-computer interfaces (BCIs) that depend on the ability to control gaze. While auditory and tactile BCIs can provide communication to such individuals, their use typically entails an artificial mapping between the stimulus and the communication intent. This makes these BCIs difficult to learn and use.

In this study, we investigated the use of selective auditory attention to natural speech as an avenue for BCI communication. In this approach, the user communicates by directing his/her attention to one of two simultaneously presented speakers. We used electrocorticographic (ECoG) signals in the gamma band (70–170 Hz) to infer the identity of the attended speaker, thereby removing the need to learn such an artificial mapping.

Our results from twelve human subjects show that a single cortical location over superior temporal gyrus or pre-motor cortex is typically sufficient to identify the attended speaker within 10 s and with 77% accuracy (chance level: 50%). These results lay the groundwork for future studies that may determine the real-time performance of BCIs based on selective auditory attention to speech.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Gupta, Disha</style></author><author><style face="normal" font="default" size="100%">Hill, Jeremy</style></author><author><style face="normal" font="default" size="100%">Adamo, Matthew A</style></author><author><style face="normal" font="default" size="100%">Ritaccio, A L</style></author><author><style face="normal" font="default" size="100%">Schalk, Gerwin</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Localizing ECoG electrodes on the cortical anatomy without post-implantation imaging.</style></title><secondary-title><style face="normal" font="default" size="100%">Neuroimage Clin</style></secondary-title><alt-title><style face="normal" font="default" size="100%">Neuroimage Clin</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">auditory processing</style></keyword><keyword><style  face="normal" font="default" size="100%">electrocorticography (ECoG)</style></keyword><keyword><style  face="normal" font="default" size="100%">electrode localization</style></keyword><keyword><style  face="normal" font="default" size="100%">fiducials</style></keyword><keyword><style  face="normal" font="default" size="100%">intraoperative localization</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2014</style></year><pub-dates><date><style
face="normal" font="default" size="100%">08/2014</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/25379417</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">6</style></volume><pages><style face="normal" font="default" size="100%">64-76</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;&lt;b&gt;INTRODUCTION: &lt;/b&gt;Electrocorticographic (ECoG) grids are placed subdurally on the cortex in people undergoing cortical resection to delineate eloquent cortex. ECoG signals have high spatial and temporal resolution and thus can be valuable for neuroscientific research. The value of these data is highest when they can be related to the cortical anatomy. Existing methods that establish this relationship rely either on post-implantation imaging using computed tomography (CT), magnetic resonance imaging (MRI) or X-Rays, or on intra-operative photographs. For research purposes, it is desirable to localize ECoG electrodes on the brain anatomy even when post-operative imaging is not available or when intra-operative photographs do not readily identify anatomical landmarks.&lt;/p&gt;&lt;p&gt;&lt;b&gt;METHODS: &lt;/b&gt;We developed a method to co-register ECoG electrodes to the underlying cortical anatomy using only a pre-operative MRI, a clinical neuronavigation device (such as BrainLab VectorVision), and fiducial markers. To validate our technique, we compared our results to data collected from six subjects who also had post-grid implantation imaging available. 
We compared the electrode coordinates obtained by our fiducial-based method to those obtained using existing methods, which are based on co-registering pre- and post-grid implantation images.&lt;/p&gt;&lt;p&gt;&lt;b&gt;RESULTS: &lt;/b&gt;Our fiducial-based method agreed with the MRI-CT method to within 8.24 mm on average (median: 7.10 mm) across the six subjects in three dimensions. It showed an average discrepancy of 2.7 mm when compared to the results of the intra-operative photograph method in a 2D coordinate system. As this method does not require post-operative imaging such as CTs, our technique should prove useful for research in intra-operative single-stage surgery scenarios. To demonstrate its use, we applied our method to real-time mapping of eloquent cortex during a single-stage surgery. The results demonstrated that our method can be applied intra-operatively in the absence of post-operative imaging to acquire ECoG signals that can be valuable for neuroscientific investigations.&lt;/p&gt;</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Gunduz, Aysegul</style></author><author><style face="normal" font="default" size="100%">Brunner, Peter</style></author><author><style face="normal" font="default" size="100%">Daitch, Amy</style></author><author><style face="normal" font="default" size="100%">Leuthardt, E C</style></author><author><style face="normal" font="default" size="100%">Ritaccio, A L</style></author><author><style face="normal" font="default" size="100%">Pesaran, Bijan</style></author><author><style face="normal" font="default" size="100%">Schalk, Gerwin</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Decoding covert spatial attention using electrocorticographic (ECoG) signals in humans.</style></title><secondary-title><style
face="normal" font="default" size="100%">Neuroimage</style></secondary-title><alt-title><style face="normal" font="default" size="100%">Neuroimage</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">covert attention</style></keyword><keyword><style  face="normal" font="default" size="100%">electrocorticography (ECoG)</style></keyword><keyword><style  face="normal" font="default" size="100%">visual spatial attention</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2012</style></year><pub-dates><date><style  face="normal" font="default" size="100%">05/2012</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/22366333</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">60</style></volume><pages><style face="normal" font="default" size="100%">2285-93</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;p&gt;This study shows that electrocorticographic (ECoG) signals recorded from the surface of the brain provide detailed information about shifting of visual attention and its directional orientation in humans. ECoG allows for the identification of the cortical areas and time periods that hold the most information about covert attentional shifts. Our results suggest a transient distributed fronto-parietal mechanism for orienting of attention that is represented by different physiological processes. This neural mechanism encodes not only whether or not a subject shifts their attention to a location, but also the locus of attention. This work contributes to our understanding of the electrophysiological representation of attention in humans. It may also eventually lead to brain-computer interfaces (BCIs) that optimize users' interaction with their surroundings or that allow people to communicate choices simply by shifting attention to them.&lt;/p&gt;</style></abstract><issue><style face="normal" font="default" size="100%">4</style></issue></record></records></xml>