NEURAL SUBSTRATES OF OPTIMAL MULTISENSORY INTEGRATION
Basic Information
- Grant Number: 9197698
- Principal Investigator:
- Amount: $346,700
- Host Institution:
- Host Institution Country: United States
- Project Category:
- Fiscal Year: 2016
- Funding Country: United States
- Project Period: 2016-01-01 to 2020-12-31
- Project Status: Completed
- Source:
- Keywords: Auditory, Auditory Perception, Bayesian Analysis, Bayesian Modeling, Behavioral, Brain, Brain imaging, Brain region, Clinical, Cognitive, Computer Simulation, Computer Vision Systems, Data, Electrocorticogram, Employee Strikes, Eye, Eye Movements, Face, Functional Magnetic Resonance Imaging, Hearing problem, Human, Illusions, Individual, Individual Differences, Investigation, Knowledge, Language, Left, Linguistics, Link, Literature, Measures, Mediating, Modality, Modeling, Movement, Neurons, Noise, Oral cavity, Perception, Persons, Population, Predisposition, Property, Publishing, Sample Size, Sensory, Signal Transduction, Speech, Speech Perception, Stimulus, Structure of superior temporal sulcus, Techniques, Testing, Time, Visual, Vocabulary, Voice, audiovisual speech, base, behavior measurement, experience, flexibility, hearing impairment, multisensory, neural model, normal aging, operation, predictive modeling, public health relevance, relating to nervous system, response, sample fixation, speech accuracy, theories, visual information, visual speech
Project Abstract
DESCRIPTION (provided by applicant): Speech perception is one of the most important cognitive operations performed by the human brain, and it is fundamentally multisensory: when conversing with someone, we use both visual information from their face and auditory information from their voice. Multisensory speech perception is especially important when the auditory component of the speech is noisy, whether due to a hearing disorder or to normal aging. However, much less is known about the neural computations underlying visual speech perception than about those underlying auditory speech perception. To remedy this gap in existing knowledge, we will use converging evidence from two complementary measures of brain activity, BOLD fMRI and electrocorticography (ECoG). The results of these neural recording studies will be interpreted in the context of a flexible computational model based on the emerging tenet that the brain performs multisensory integration using optimal or Bayesian inference, combining the currently available sensory information with prior experience.

In the first Aim, a Bayesian model will be constructed to explain individual differences in multisensory speech perception along three axes: subjects' ability to understand noisy audiovisual speech; subjects' susceptibility to the McGurk effect, a multisensory illusion; and the time spent fixating the mouth of a talking face. In the second Aim, we will explore the neural encoding of visual speech using voxel-wise forward encoding models of the BOLD fMRI signal, developing encoding models to test seven different theories of visual speech representation drawn from the linguistic and computer vision literature. In the third Aim, we will use ECoG to examine the neural computations for integrating visual and auditory speech, guided by the Bayesian models developed in Aim 1. First, we will study the reduced neural variability for multisensory speech predicted by our model. Second, we will study the representational space of unisensory and multisensory speech.
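The "optimal or Bayesian inference" framework referenced above is most often formalized as reliability-weighted cue combination: each cue (and the prior) contributes to the fused estimate in proportion to its inverse variance, and the fused estimate is less variable than either unisensory estimate, which is the reduced-variability prediction examined in the third Aim. The following is a minimal sketch of that computation, assuming Gaussian likelihoods and a Gaussian prior; the function name and example values are illustrative and not taken from the project.

```python
import numpy as np

def fuse_cues(mu_a, var_a, mu_v, var_v, mu_prior=0.0, var_prior=np.inf):
    """Bayes-optimal (reliability-weighted) fusion of auditory and visual cues.

    Each cue and the prior is modeled as a Gaussian over the same latent
    speech feature; the posterior mean weights each source by its precision
    (inverse variance), and the posterior variance is smaller than the
    variance of any single source.
    """
    precisions = np.array([1.0 / var_a, 1.0 / var_v, 1.0 / var_prior])
    means = np.array([mu_a, mu_v, mu_prior])
    post_var = 1.0 / precisions.sum()
    post_mean = post_var * np.sum(precisions * means)
    return post_mean, post_var

# A noisy auditory cue (large variance) is dominated by a clearer visual cue:
mu, var = fuse_cues(mu_a=1.0, var_a=4.0, mu_v=0.2, var_v=1.0)
print(mu, var)  # posterior lies nearer the visual estimate, with variance below 1.0
```

Individual differences of the kind targeted in the first Aim can be expressed in this framework by letting the cue variances and the prior vary across subjects.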
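The voxel-wise forward encoding models of the second Aim are typically fit as regularized linear regressions from a stimulus feature space to each voxel's BOLD time course, with competing feature spaces (here, the seven candidate theories of visual speech representation) compared by cross-validated prediction accuracy. The sketch below uses ridge regression on simulated placeholder data; the feature matrix, dimensions, and scoring choices are assumptions for illustration rather than the project's actual stimuli or analysis pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: 200 fMRI time points, a 15-dimensional feature space for one
# candidate theory of visual speech (e.g., mouth-shape parameters), and one
# simulated voxel whose response depends linearly on those features plus noise.
X_theory = rng.standard_normal((200, 15))
true_weights = rng.standard_normal(15)
bold = X_theory @ true_weights + rng.standard_normal(200)

# Forward encoding model: ridge regression from features to the voxel response,
# with the regularization strength chosen by internal cross-validation.
encoder = RidgeCV(alphas=np.logspace(-2, 3, 20))
scores = cross_val_score(encoder, X_theory, bold, cv=5, scoring="r2")
print(scores.mean())  # held-out prediction accuracy for this feature space

# Fitting the same kind of model for each candidate feature space and comparing
# the held-out accuracies voxel by voxel is one standard way to adjudicate
# among representation theories.
```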
Project Outcomes
Journal Articles (0)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
Other Publications by Michael S Beauchamp
Other Grants by Michael S Beauchamp
Dynamic Neural Mechanisms of Audiovisual Speech Perception
- Grant Number: 10405731
- Fiscal Year: 2019
- Funding Amount: $346,700
- Project Category:

Dynamic Neural Mechanisms of Audiovisual Speech Perception
- Grant Number: 10676997
- Fiscal Year: 2019
- Funding Amount: $346,700
- Project Category:

Dynamic Neural Mechanisms of Audiovisual Speech Perception
- Grant Number: 10459624
- Fiscal Year: 2019
- Funding Amount: $346,700
- Project Category:

Dynamic Neural Mechanisms of Audiovisual Speech Perception
- Grant Number: 10016852
- Fiscal Year: 2019
- Funding Amount: $346,700
- Project Category:

RAVE: A New Open Software Tool for Analysis and Visualization of Electrocorticography Data
- Grant Number: 9766391
- Fiscal Year: 2018
- Funding Amount: $346,700
- Project Category:

NEURAL SUBSTRATES OF OPTIMAL MULTISENSORY INTEGRATION
- Grant Number: 9055439
- Fiscal Year: 2016
- Funding Amount: $346,700
- Project Category:

Neural Mechanisms of Optimal Multisensory Integration
- Grant Number: 8018453
- Fiscal Year: 2010
- Funding Amount: $346,700
- Project Category:

Neural substrates of optimal multisensory integration
- Grant Number: 10735194
- Fiscal Year: 2010
- Funding Amount: $346,700
- Project Category:

Neural Mechanisms of Optimal Multisensory Integration
- Grant Number: 7895476
- Fiscal Year: 2010
- Funding Amount: $346,700
- Project Category:

Neural Mechanisms of Optimal Multisensory Integration
- Grant Number: 8416984
- Fiscal Year: 2010
- Funding Amount: $346,700
- Project Category:
Similar Overseas Grants
The significance of nominally non-responsive neural dynamics in auditory perception and behavior
- Grant Number: 10677342
- Fiscal Year: 2023
- Funding Amount: $346,700
- Project Category:

Integrated understanding of the brain network: from auditory perception
- Grant Number: 22K18399
- Fiscal Year: 2022
- Funding Amount: $346,700
- Project Category: Grant-in-Aid for Challenging Research (Pioneering)

Individual differences across the lifespan in auditory perception
- Grant Number: RGPIN-2019-04474
- Fiscal Year: 2022
- Funding Amount: $346,700
- Project Category: Discovery Grants Program - Individual

Identifying and Modulating Neural Signatures of Auditory Perception
- Grant Number: 559898-2021
- Fiscal Year: 2022
- Funding Amount: $346,700
- Project Category: Alexander Graham Bell Canada Graduate Scholarships - Doctoral

Contribution of a corticofugal pathway to auditory perception
- Grant Number: 10571844
- Fiscal Year: 2022
- Funding Amount: $346,700
- Project Category:

Predictive Spatiotemporal Models of Auditory Perception
- Grant Number: 548180-2020
- Fiscal Year: 2022
- Funding Amount: $346,700
- Project Category: Alexander Graham Bell Canada Graduate Scholarships - Doctoral

Establishing the role of NDNF neurons in shaping auditory perception
- Grant Number: 558008-2021
- Fiscal Year: 2022
- Funding Amount: $346,700
- Project Category: Postdoctoral Fellowships

Using touch to enhance auditory perception
- Grant Number: EP/W032422/1
- Fiscal Year: 2022
- Funding Amount: $346,700
- Project Category: Research Grant

Cortical Hyperexcitability Underlies Aberrant Auditory Perception
- Grant Number: 2700691
- Fiscal Year: 2022
- Funding Amount: $346,700
- Project Category: Studentship

Musical abilities, language and reading skills: an analysis of auditory perception using FFR
- Grant Number: 575730-2022
- Fiscal Year: 2022
- Funding Amount: $346,700
- Project Category: Alexander Graham Bell Canada Graduate Scholarships - Master's