Neural mechanisms: Learned audio-visuo-motor integration
Basic Information
- Grant number: 7033274
- Principal investigator: JOHN W BELLIVEAU
- Amount: $715.4K
- Host institution:
- Host institution country: United States
- Project category:
- Fiscal year: 2006
- Funding country: United States
- Project period: 2006-01-01 to 2009-12-31
- Project status: Completed
- Source:
- Keywords: association learning; behavioral/social science research tag; bioimaging/biomedical imaging; brain electrical activity; brain imaging/visualization/scanning; brain mapping; clinical research; computational neuroscience; functional magnetic resonance imaging; human subject; magnetoencephalography; memory; neural information processing; neuropsychology; non-English language; psychomotor function; psychophysiology; sound perception; speech; verbal behavior; visual perception
Project Summary
DESCRIPTION (provided by applicant): This study addresses fundamental questions about the large-scale neural networks in the human brain that support crossmodal cognition. To reveal how auditory and visual stimuli and motor acts are arbitrarily combined through crossmodal learning and integrated into supramodal symbolic representations, we will study the neural representations of the letters of the Roman alphabet. These consist of four unimodal representations (visual, auditory, and motor representations for writing and speaking) and the learned connections between them, that is, the processes that underlie their audiovisual recognition and motor production. Accurate experimental control is facilitated by the fact that letters exhibit all the necessary properties of symbolic crossmodal representations in a physically simple and exact format, while carrying no semantic associations that could confound the neurophysiological interpretation of the results. Combined 3-Tesla functional magnetic resonance imaging (fMRI) and 306-channel magnetoencephalographic / 128-channel electroencephalographic (MEG/EEG) recordings, together with simultaneous behavioral measurements, will be applied to pinpoint the underlying neural mechanisms. This approach combines the advantages of spatially accurate fMRI with temporally specific MEG/EEG, enabling accurate spatiotemporal characterization of brain activity entirely noninvasively. To directly observe how large-scale neurocognitive networks evolve during crossmodal associative learning, we will also conduct fMRI/MEG/EEG measurements before and after our subjects are taught (previously unfamiliar) Japanese kana letters. The specific aims are to elucidate the structure, function, and oscillatory mechanisms of fully established crossmodal neural networks built through previous extensive associative learning (Roman letters) and of currently evolving networks representing novel crossmodal associations (Japanese letters). We will characterize the relative roles of deep brain nuclei, the cerebellum, the medial temporal lobe, and sensory-specific and multisensory association cortices in such networks. The multidimensional experimental design allows isolation of the neural mechanisms utilized by perception, working memory, memory encoding, and recall.
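The proposal's central methodological idea, pairing the spatial precision of fMRI with the millisecond resolution of MEG/EEG, is often realized by using fMRI activation maps as a spatial prior on a distributed MEG/EEG source estimate. The following NumPy sketch illustrates that linear algebra under simplified, hypothetical assumptions (a random simulated lead field, a diagonal noise covariance, a binary fMRI prior); it is not the project's actual analysis pipeline, and all dimensions, variable names, and weights are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions only: 306 MEG + 128 EEG channels, 500 cortical source locations.
n_sensors, n_sources = 434, 500

# Hypothetical gain (lead-field) matrix mapping source amplitudes to sensor readings.
G = rng.standard_normal((n_sensors, n_sources))

# Hypothetical fMRI activation map: 1 at vertices with a significant BOLD response.
fmri_active = np.zeros(n_sources)
fmri_active[[40, 41, 200]] = 1.0

# Source-covariance prior R: full weight at fMRI-active vertices, reduced weight elsewhere
# (the 0.1 off-focus weighting is a placeholder, not a fixed rule).
R = np.diag(np.where(fmri_active > 0, 1.0, 0.1))

# Sensor noise covariance (assumed diagonal here) and Tikhonov regularization term.
C = np.eye(n_sensors)
lam2 = 1.0 / 9.0  # corresponds to an assumed amplitude SNR of 3

# Simulate one time sample generated by a single source at an fMRI-active vertex.
s_true = np.zeros(n_sources)
s_true[40] = 1.0
y = G @ s_true + 0.05 * rng.standard_normal(n_sensors)

# fMRI-weighted minimum-norm inverse operator:
#   W = R G^T (G R G^T + lambda^2 C)^-1,   s_hat = W y
W = R @ G.T @ np.linalg.inv(G @ R @ G.T + lam2 * C)
s_hat = W @ y

print("peak |source estimate| at vertex:", int(np.argmax(np.abs(s_hat))))
```

Raising or lowering the off-focus weight in R trades off trust in the fMRI prior against the risk of suppressing MEG/EEG sources that produced no detectable BOLD response.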
Project Outcomes
Journal articles (0)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Other publications by JOHN W BELLIVEAU
Other grants by JOHN W BELLIVEAU
MRI-Navigated 2-Channel TMS with 60-channel EEG Instrument
- Grant number: 7389324
- Fiscal year: 2008
- Funding amount: $715.4K
- Project category:
Neural mechanisms: Learned audio-visuo-motor integration
- Grant number: 7166038
- Fiscal year: 2006
- Funding amount: $715.4K
- Project category:
Neural mechanisms: Learned audio-visuo-motor integration
- Grant number: 7352668
- Fiscal year: 2006
- Funding amount: $715.4K
- Project category:
Neural mechanisms: Learned audio-visuo-motor integration
- Grant number: 7547049
- Fiscal year: 2006
- Funding amount: $715.4K
- Project category:
Spatiotemporal Brain Imaging of Human Auditory Cognition
- Grant number: 6779801
- Fiscal year: 2002
- Funding amount: $715.4K
- Project category:
Spatiotemporal Brain Imaging of Human Auditory Cognition
- Grant number: 7910638
- Fiscal year: 2002
- Funding amount: $715.4K
- Project category:
Spatiotemporal Brain Imaging of Human Auditory Cognition
- Grant number: 7092611
- Fiscal year: 2002
- Funding amount: $715.4K
- Project category:
Spatiotemporal Brain Imaging of Human Auditory Cognition
- Grant number: 7581838
- Fiscal year: 2002
- Funding amount: $715.4K
- Project category: