<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Pei, Xiao-Mei</style></author><author><style face="normal" font="default" size="100%">Jeremy Hill</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Silent Communication: toward using brain signals.</style></title><secondary-title><style face="normal" font="default" size="100%">IEEE Pulse</style></secondary-title><alt-title><style face="normal" font="default" size="100%">IEEE Pulse</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Animals</style></keyword><keyword><style  face="normal" font="default" size="100%">Brain</style></keyword><keyword><style  face="normal" font="default" size="100%">Brain Waves</style></keyword><keyword><style  face="normal" font="default" size="100%">Humans</style></keyword><keyword><style  face="normal" font="default" size="100%">Movement</style></keyword><keyword><style  face="normal" font="default" size="100%">User-Computer Interface</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2012</style></year><pub-dates><date><style  face="normal" font="default" size="100%">01/2012</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/22344951</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">3</style></volume><pages><style face="normal" font="default" size="100%">43-6</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">From the 1980s movie Firefox to the more recent Avatar, popular science fiction has speculated about the possibility of a person's thoughts being read directly from his or her brain. Such brain-computer interfaces (BCIs) might allow people who are paralyzed to communicate with and control their environment, and there might also be applications in military situations wherever silent user-to-user communication is desirable. Previous studies have shown that BCI systems can use brain signals related to movements and movement imagery or attention-based character selection. Although these systems have successfully demonstrated the possibility of controlling devices using brain function, directly inferring which word a person intends to communicate has been elusive. A BCI using imagined speech might provide such a practical, intuitive device. Toward this goal, our studies to date addressed two scientific questions: (1) Can brain signals accurately characterize different aspects of speech? (2) Is it possible to predict spoken or imagined words or their components using brain signals?</style></abstract><issue><style face="normal" font="default" size="100%">1</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Pei, Xiao-Mei</style></author><author><style face="normal" font="default" size="100%">Barbour, Dennis L</style></author><author><style face="normal" font="default" size="100%">Leuthardt, E C</style></author><author><style face="normal" font="default" size="100%">Gerwin Schalk</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans.</style></title><secondary-title><style face="normal" font="default" size="100%">J Neural Eng</style></secondary-title><alt-title><style face="normal" font="default" size="100%">J Neural Eng</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Adolescent</style></keyword><keyword><style  face="normal" font="default" size="100%">Adult</style></keyword><keyword><style  face="normal" font="default" size="100%">Brain</style></keyword><keyword><style  face="normal" font="default" size="100%">Brain Mapping</style></keyword><keyword><style  face="normal" font="default" size="100%">Cerebral Cortex</style></keyword><keyword><style  face="normal" font="default" size="100%">Communication Aids for Disabled</style></keyword><keyword><style  face="normal" font="default" size="100%">Data Interpretation, Statistical</style></keyword><keyword><style  face="normal" font="default" size="100%">Discrimination (Psychology)</style></keyword><keyword><style  face="normal" font="default" size="100%">Electrodes, Implanted</style></keyword><keyword><style  face="normal" font="default" 
size="100%">Electroencephalography</style></keyword><keyword><style  face="normal" font="default" size="100%">Epilepsy</style></keyword><keyword><style  face="normal" font="default" size="100%">Female</style></keyword><keyword><style  face="normal" font="default" size="100%">Functional Laterality</style></keyword><keyword><style  face="normal" font="default" size="100%">Humans</style></keyword><keyword><style  face="normal" font="default" size="100%">Male</style></keyword><keyword><style  face="normal" font="default" size="100%">Middle Aged</style></keyword><keyword><style  face="normal" font="default" size="100%">Movement</style></keyword><keyword><style  face="normal" font="default" size="100%">Speech Perception</style></keyword><keyword><style  face="normal" font="default" size="100%">User-Computer Interface</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2011</style></year><pub-dates><date><style  face="normal" font="default" size="100%">08/2011</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/21750369</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">8</style></volume><pages><style face="normal" font="default" size="100%">046028</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Several stories in the popular media have speculated that it may be possible to infer from the brain which word a person is speaking or even thinking. While recent studies have demonstrated that brain signals can give detailed information about actual and imagined actions, such as different types of limb movements or spoken words, concrete experimental evidence for the possibility to 'read the mind', i.e. to interpret internally generated speech, has been scarce. In this study, we found that it is possible to use signals recorded from the surface of the brain (electrocorticography) to discriminate the vowels and consonants embedded in spoken and in imagined words, and we defined the cortical areas that held the most information about discrimination of vowels and consonants. The results shed light on the distinct mechanisms associated with production of vowels and consonants, and could provide the basis for brain-based communication using imagined speech.</style></abstract><issue><style face="normal" font="default" size="100%">4</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Pei, Xiao-Mei</style></author><author><style face="normal" font="default" size="100%">Zheng, Shi Dong</style></author><author><style face="normal" font="default" size="100%">Xu, Jin</style></author><author><style face="normal" font="default" size="100%">Bin, Guang-yu</style></author><author><style face="normal" font="default" size="100%">Zuoguan Wang</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Multi-channel linear descriptors for event-related EEG collected in brain computer interface.</style></title><secondary-title><style face="normal" font="default" size="100%">J Neural Eng</style></secondary-title><alt-title><style face="normal" font="default" size="100%">J Neural Eng</style></alt-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Algorithms</style></keyword><keyword><style  
face="normal" font="default" size="100%">Electroencephalography</style></keyword><keyword><style  face="normal" font="default" size="100%">Evoked Potentials, Motor</style></keyword><keyword><style  face="normal" font="default" size="100%">Humans</style></keyword><keyword><style  face="normal" font="default" size="100%">Imagination</style></keyword><keyword><style  face="normal" font="default" size="100%">Motor Cortex</style></keyword><keyword><style  face="normal" font="default" size="100%">Movement</style></keyword><keyword><style  face="normal" font="default" size="100%">Pattern Recognition, Automated</style></keyword><keyword><style  face="normal" font="default" size="100%">Reproducibility of Results</style></keyword><keyword><style  face="normal" font="default" size="100%">Sensitivity and Specificity</style></keyword><keyword><style  face="normal" font="default" size="100%">User-Computer Interface</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2006</style></year><pub-dates><date><style  face="normal" font="default" size="100%">03/2006</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.ncbi.nlm.nih.gov/pubmed/16510942</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">3</style></volume><pages><style face="normal" font="default" size="100%">52-8</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Using three multi-channel linear descriptors, i.e. spatial complexity (omega), field power (sigma) and frequency of field changes (phi), event-related EEG data within 8-30 Hz were investigated during imagination of left or right hand movement. Studies on the event-related EEG data indicate that a two-channel version of omega, sigma and phi could reflect the antagonistic ERD/ERS patterns over contralateral and ipsilateral areas and also characterize different phases of the changing brain states in the event-related paradigm. Based on the selected two-channel linear descriptors, the left and right hand motor imagery tasks were classified with satisfactory results, which testifies to the validity of the three linear descriptors omega, sigma and phi for characterizing event-related EEG. The preliminary results show that omega and sigma together with phi have good separability for left and right hand motor imagery tasks, and could be considered for classification of two classes of EEG patterns in brain-computer interface applications.</style></abstract><issue><style face="normal" font="default" size="100%">1</style></issue></record></records></xml>