Neural substrates of optimal multisensory integration


Basic Information

  • Award Number:
    10735194
  • Principal Investigator:
  • Amount:
    $581.9K
  • Host Institution:
  • Host Institution Country:
    United States
  • Project Category:
  • Fiscal Year:
    2010
  • Funding Country:
    United States
  • Project Period:
    2010-02-01 to 2028-06-30
  • Project Status:
    Open (not yet concluded)

Project Summary

When communicating face-to-face, humans receive information in two sensory modalities: visual information from the face of the talker and auditory information from the voice of the talker. The pandemic has brought into sharp focus the importance of audiovisual speech: mask wearing obscures the talker's face while muffling the voice, a double whammy that hinders communication. The popularity of video conferencing software attests to the importance people place on seeing a talker's face as well as hearing their voice. Although audiovisual speech perception is very important, we know little about the neural mechanisms underlying this uniquely human ability. We will remedy this gap in knowledge using the most powerful techniques in human neuroscience: computational modeling; behavioral studies; ultra-high-field (7 tesla) functional MRI (fMRI); and the examination of epilepsy patients who have electrodes implanted in their brains for the treatment of medically intractable epilepsy, a technique referred to as intracranial electroencephalography (iEEG). The anatomical focus of the proposal is the posterior superior temporal sulcus/gyrus (STS/G), known since the time of Wernicke to be important for speech perception. The behavioral focus of our proposal is the new discovery that a classic audiovisual speech illusion, known as the McGurk effect, can produce dramatic, long-lasting changes in auditory-only speech perception, turning a "ba" into a "da". This phenomenon, termed fusion-induced recalibration (FIR), provides a tool to advance computational and neural studies of speech perception. The first aim will develop computational models of speech perception and test them against behavioral data. Different models will be fit to speech perception data before, during, and after exposure to the McGurk effect. Fitted models will be compared using held-out behavioral data. Because the models instantiate different theoretical constructs, model comparison will determine which explanatory constructs are essential. These results will provide a solid theoretical grounding for future studies, including those in Aims 2 and 3: searching for a neural correlate of an unjustified construct is likely to be fruitless. The second aim will examine speech perception through the lens of patterns of activity in STS/G measured with 7 tesla fMRI. We expect to observe reliable changes in STS/G response patterns before and after exposure to the McGurk effect, reflecting modification of speech representations (in contrast, in cortical areas driven solely by acoustic features, McGurk exposure should not change fMRI response patterns). The third aim will use iEEG to record broadband high-frequency activity (BHA) from small populations of STS/G neurons with high temporal resolution. Responses to the auditory-only component of McGurk speech (but not control speech) are predicted to show sustained decreases after successive blocks of audiovisual McGurk exposure, in lockstep with the perceptual development of FIR.
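The model-fitting and comparison logic described in Aim 1 can be illustrated with a minimal, self-contained sketch. The example below is an assumption, not the project's actual models, data, or code: it simulates "da"-response proportions on a hypothetical auditory /ba/-/da/ continuum before and after McGurk exposure, fits a cumulative-Gaussian psychometric model with and without a recalibration (category-boundary shift) parameter, and compares the fits. AIC is used here as a simple stand-in for the held-out comparison the project describes; all variable names and parameter values are illustrative.

```python
# Minimal sketch of Aim 1-style model comparison (illustrative only; simulated data,
# hypothetical parameter values, and AIC as a stand-in for held-out comparison).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical auditory /ba/-/da/ continuum and simulated "da" responses
# before and after audiovisual McGurk exposure.
stimulus = np.linspace(-2.0, 2.0, 9)                  # acoustic continuum (arbitrary units)
n_trials = 40                                         # trials per continuum step
p_da_pre = norm.cdf(stimulus, loc=0.0, scale=1.0)     # baseline category boundary at 0
p_da_post = norm.cdf(stimulus, loc=-0.4, scale=1.0)   # boundary shifted by recalibration
k_pre = rng.binomial(n_trials, p_da_pre)              # simulated "da" counts, pre-exposure
k_post = rng.binomial(n_trials, p_da_post)            # simulated "da" counts, post-exposure

def neg_log_lik(params, shift_allowed):
    """Binomial negative log-likelihood of a cumulative-Gaussian psychometric model.
    Model A (shift_allowed=False): one boundary for pre and post (no recalibration).
    Model B (shift_allowed=True): exposure shifts the boundary (recalibration)."""
    mu, sigma = params[0], np.exp(params[1])          # log-parameterize sigma to keep it positive
    shift = params[2] if shift_allowed else 0.0
    p_pre = norm.cdf(stimulus, loc=mu, scale=sigma)
    p_post = norm.cdf(stimulus, loc=mu + shift, scale=sigma)
    eps = 1e-9                                        # guard against log(0)
    ll = np.sum(k_pre * np.log(p_pre + eps) + (n_trials - k_pre) * np.log(1 - p_pre + eps))
    ll += np.sum(k_post * np.log(p_post + eps) + (n_trials - k_post) * np.log(1 - p_post + eps))
    return -ll

fit_a = minimize(neg_log_lik, x0=[0.0, 0.0], args=(False,), method="Nelder-Mead")
fit_b = minimize(neg_log_lik, x0=[0.0, 0.0, 0.0], args=(True,), method="Nelder-Mead")

# Penalize the extra recalibration parameter; lower AIC indicates the better account.
aic_a = 2 * 2 + 2 * fit_a.fun
aic_b = 2 * 3 + 2 * fit_b.fun
print(f"No-recalibration model: AIC = {aic_a:.1f}")
print(f"Recalibration model:    AIC = {aic_b:.1f} (boundary shift = {fit_b.x[2]:.2f})")
```

In the project itself, the candidate models instantiate different theoretical constructs (for example, the noisy encoding of disparity model listed among the project outcomes), and held-out behavioral data, rather than an information criterion, would adjudicate between them.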

Project Outcomes

Journal articles (23)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Multivariate fMRI responses in superior temporal cortex predict visual contributions to, and individual differences in, the intelligibility of noisy speech.
  • DOI:
    10.1016/j.neuroimage.2023.120271
  • Publication Date:
    2023-09
  • Journal:
  • Impact Factor:
    5.7
  • Authors:
    Zhang, Yue; Rennig, Johannes; Magnotti, John F.; Beauchamp, Michael S.
  • Corresponding Author:
    Beauchamp, Michael S.
The social mysteries of the superior temporal sulcus.
  • DOI:
    10.1016/j.tics.2015.07.002
  • Publication Date:
    2015-09
  • Journal:
  • Impact Factor:
    19.9
  • Authors:
    Beauchamp MS
  • Corresponding Author:
    Beauchamp MS
The noisy encoding of disparity model of the McGurk effect.
  • DOI:
    10.3758/s13423-014-0722-2
  • Publication Date:
    2015-06
  • Journal:
  • Impact Factor:
    3.5
  • Authors:
    Magnotti JF; Beauchamp MS
  • Corresponding Author:
    Beauchamp MS
A neural link between feeling and hearing.
  • DOI:
    10.1093/cercor/bhs166
  • Publication Date:
    2013-07
  • Journal:
  • Impact Factor:
    3.7
  • Authors:
    T. Ro; T. Ellmore; M. Beauchamp
  • Corresponding Author:
    T. Ro; T. Ellmore; M. Beauchamp
Erratum to: Similar frequency of the McGurk effect in large samples of native Mandarin Chinese and American English speakers.
  • DOI:
    10.1007/s00221-016-4634-4
  • Publication Date:
    2016
  • Journal:
  • Impact Factor:
    2
  • Authors:
    Magnotti, John F; Mallick, Debshila Basu; Feng, Guo; Zhou, Bin; Zhou, Wen; Beauchamp, Michael S
  • Corresponding Author:
    Beauchamp, Michael S


Other Grants by Michael S Beauchamp

Dynamic Neural Mechanisms of Audiovisual Speech Perception
  • Award Number:
    10405731
  • Fiscal Year:
    2019
  • Funding Amount:
    $581.9K
  • Project Category:
Dynamic Neural Mechanisms of Audiovisual Speech Perception
  • Award Number:
    10676997
  • Fiscal Year:
    2019
  • Funding Amount:
    $581.9K
  • Project Category:
Dynamic Neural Mechanisms of Audiovisual Speech Perception
  • Award Number:
    10459624
  • Fiscal Year:
    2019
  • Funding Amount:
    $581.9K
  • Project Category:
Dynamic Neural Mechanisms of Audiovisual Speech Perception
  • Award Number:
    10016852
  • Fiscal Year:
    2019
  • Funding Amount:
    $581.9K
  • Project Category:
RAVE: A New Open Software Tool for Analysis and Visualization of Electrocorticography Data
  • Award Number:
    9766391
  • Fiscal Year:
    2018
  • Funding Amount:
    $581.9K
  • Project Category:
NEURAL SUBSTRATES OF OPTIMAL MULTISENSORY INTEGRATION
  • Award Number:
    9197698
  • Fiscal Year:
    2016
  • Funding Amount:
    $581.9K
  • Project Category:
NEURAL SUBSTRATES OF OPTIMAL MULTISENSORY INTEGRATION
  • Award Number:
    9055439
  • Fiscal Year:
    2016
  • Funding Amount:
    $581.9K
  • Project Category:
Neural Mechanisms of Optimal Multisensory Integration
  • Award Number:
    8018453
  • Fiscal Year:
    2010
  • Funding Amount:
    $581.9K
  • Project Category:
Neural Mechanisms of Optimal Multisensory Integration
  • Award Number:
    7895476
  • Fiscal Year:
    2010
  • Funding Amount:
    $581.9K
  • Project Category:
Neural Mechanisms of Optimal Multisensory Integration
  • Award Number:
    8416984
  • Fiscal Year:
    2010
  • Funding Amount:
    $581.9K
  • Project Category:

Similar Overseas Grants

Medication Adherence and Cardio-Metabolic Control Indicators among Adult American Indians Receiving Tribal Health Services
  • Award Number:
    10419967
  • Fiscal Year:
    2022
  • Funding Amount:
    $581.9K
  • Project Category:
A neuroimaging approach to advance mechanistic understanding of tobacco use escalation risk among young adult African American vapers
  • Award Number:
    10509308
  • Fiscal Year:
    2022
  • Funding Amount:
    $581.9K
  • Project Category:
Understanding social undermining of weight management behaviors in young adult African American women
  • Award Number:
    10680412
  • Fiscal Year:
    2022
  • Funding Amount:
    $581.9K
  • Project Category:
Understanding social undermining of weight management behaviors in young adult African American women
  • Award Number:
    10535890
  • Fiscal Year:
    2022
  • Funding Amount:
    $581.9K
  • Project Category:
A neuroimaging approach to advance mechanistic understanding of tobacco use escalation risk among young adult African American vapers
  • Award Number:
    10629374
  • Fiscal Year:
    2022
  • Funding Amount:
    $581.9K
  • Project Category:
Medication Adherence and Cardio-Metabolic Control Indicators among Adult American Indians Receiving Tribal Health Services
  • Award Number:
    10592441
  • Fiscal Year:
    2022
  • Funding Amount:
    $581.9K
  • Project Category:
Impact of Adult Day Services on Psychosocial and Physiological Measures of Stress among African American Dementia Family Caregivers
  • Award Number:
    10553725
  • Fiscal Year:
    2021
  • Funding Amount:
    $581.9K
  • Project Category:
Voice-Activated Technology to Improve Mobility & Reduce Health Disparities: EngAGEing African American Older Adult-Care Partner Dyads
  • Award Number:
    10494191
  • Fiscal Year:
    2021
  • Funding Amount:
    $581.9K
  • Project Category:
Impact of Adult Day Services on Psychosocial and Physiological Measures of Stress among African American Dementia Family Caregivers
  • Award Number:
    10328955
  • Fiscal Year:
    2021
  • Funding Amount:
    $581.9K
  • Project Category:
Voice-Activated Technology to Improve Mobility & Reduce Health Disparities: EngAGEing African American Older Adult-Care Partner Dyads
  • Award Number:
    10437374
  • Fiscal Year:
    2021
  • Funding Amount:
    $581.9K
  • Project Category: