Multisensory Processing of Human Speech Measured with msec and mm Resolution
Overview
Face-to-face communication is the most important form of human interaction. When conversing, we receive auditory information from the talker's voice and visual information from the talker's face. Combining these two sources of information is difficult because they arrive quickly (about 5 syllables per second) and the correspondence between the vocal sounds and the talker's mouth movements is complex. We propose to study the neural mechanisms that underlie multisensory (auditory and visual) speech perception using electrocorticography (ECoG), a neural recording technique in which electrodes are implanted in the brains of patients with epilepsy. ECoG is the ideal technique for our research question because it measures human brain activity with very high temporal and spatial resolution (millisecond and millimeter). ECoG can be used to examine the diverse network of brain areas active during speech perception. Electrodes over auditory cortex respond strongly to the auditory component of speech, while electrodes over occipital cortex respond strongly to the visual component. Located between auditory and visual cortex, posterior lateral temporal cortex is thought to integrate auditory and visual speech.

Poor hearing is one of the most common disabilities among veterans. Because speech is the basis of our social relationships, poor speech perception can lead to social isolation, depression, and other health problems. A better understanding of the neural mechanisms underlying multisensory speech perception will allow us to improve veterans' ability to understand speech, leading to major improvements in their quality of life. To ensure that our results are immediately applicable to real-world situations, we will study brain responses to natural English-language words spoken by English talkers.

In addition to its potential clinical benefits, the proposed research will also have a significant impact on basic science. Multisensory integration is a major new field in neuroscience and has proven to be fertile ground for mathematical models of brain and behavior. Our work will serve as a bridge between experiments and models of simple multisensory behaviors, such as auditory-visual localization, and more complex cognitive tasks, exemplified by multisensory speech perception. The successful completion of these studies will represent a major step forward in our understanding of the neural substrates of multisensory speech perception.