Multisensory foundations of speech perception in infancy


Basic Information

  • Grant number:
    8575658
  • Principal investigator:
    JANET F. WERKER
  • Amount:
    $149.6K
  • Host institution:
  • Host institution country:
    United States
  • Project category:
  • Fiscal year:
    2013
  • Funding country:
    United States
  • Project period:
    2013-08-12 to 2015-05-31
  • Project status:
    Completed

Project Abstract

DESCRIPTION (provided by applicant): Infants are born with a preference for listening to speech over non-speech, and with a set of perceptual sensitivities that enable them to discriminate most of the speech-sound differences used in the world's languages, thus preparing them to acquire any language. By 10 months of age, infants become experts at perceiving their native language. This involves improvements in discrimination of native consonant contrasts but, more importantly for this grant, a decline in discrimination of non-native consonant distinctions. In the adult, speech perception is richly multimodal: what we hear is influenced by visual information in talking faces, by self-produced articulations, and even by external tactile stimulation. While speech perception is also multisensory in young infants, the genesis of this capacity is debated. According to one view, multisensory perception is established through learned integration: seeing and hearing a particular speech sound allows learning of the commonalities between the two. This grant proposes and tests the hypothesis that infant speech perception is multisensory without specific prior learning experience.

Debates regarding the ontogeny of human language have centered on whether the perceptual building blocks of language are acquired through experience or are innate. Yet this nature-versus-nurture controversy is rapidly being replaced by a much more nuanced framework. Here, it is proposed that the earliest-developing sensory system (likely somatosensory in the case of speech, including somatosensory feedback from oral-motor movements that are first manifest in the fetus) provides an organization on which auditory speech can build once the peripheral auditory system comes online by 22 weeks' gestation. Heard speech, both the maternal voice via bone conduction and external (filtered) speech through the uterus, is organized in part by this somatosensory/motor foundation. At birth, when vision becomes available, seen speech maps onto this already established foundation. These interconnected perceptual systems thus provide a set of parameters for matching heard, seen, and felt speech at birth. Importantly, it is argued that these multisensory perceptual foundations are established for language-general perception: they set in place an organization that provides redundancy among the oral-motor gesture, the visible oral-motor movements, and the auditory percept of any speech sound. Hence, specific learning of individual cross-modal matches is not required. Our thesis, then, is that while multisensory speech perception has a developmental history (and hence is not akin to an 'innate' starting point), the multisensory sensitivities should be in place without experience of specific speech sounds. Multisensory processing should therefore be as evident for non-native, never-before-experienced speech sounds as it is for native, and hence familiar, ones.

To test this hypothesis against the alternative hypothesis of learned integration, English-learning infants will be tested on non-native (unfamiliar) speech-sound contrasts and compared to Hindi-learning infants, for whom these contrasts are native. Four sets of experiments, each using a multimodal Distributional Learning paradigm, are proposed. Infants will be tested at 6 months, an age at which they can still discriminate non-native speech sounds, and at 10 months, an age by which this ability begins to fail. It is proposed that if speech perception is multisensory without specific experience, the addition of matching visual, tactile, or motor information should facilitate discrimination of a non-native speech-sound contrast at 10 months, while the addition of mismatching information should disrupt discrimination at 6 months. If multisensory speech perception is learned, this pattern should be seen only for Hindi-learning infants, for whom the contrasts are familiar and hence already intersensory. The Specific Aims are to test the influence of: 1) visual information on auditory speech perception (Experimental Set 1); 2) oral-motor gestures on auditory speech perception (Experimental Set 2); 3) oral-motor gestures on auditory-visual speech perception (Experimental Set 3); and 4) tactile information on auditory speech perception (Experimental Set 4).

This work is of theoretical import for characterizing speech perception development in typically developing infants, and it provides a framework for understanding the roots of possible delay in infants born with a sensory or oral-motor impairment. The opportunities provided by, and constraints imposed by, an initial multisensory speech percept allow infants to rapidly acquire knowledge from their language-learning environment, while a deficit in one of the contributing modalities could compromise optimal speech and language development.

Project Outcomes

Journal articles: 0
Monographs: 0
Research awards: 0
Conference papers: 0
Patents: 0


Other Grants by JANET F. WERKER

Multisensory foundations of speech perception in infancy
  • Grant number:
    8720041
  • Fiscal year:
    2013
  • Funding amount:
    $149.6K
  • Project category:

Similar Overseas Grants

Nonlinear Acoustics for the conditioning monitoring of Aerospace structures (NACMAS)
  • Grant number:
    10078324
  • Fiscal year:
    2023
  • Funding amount:
    $149.6K
  • Project category:
    BEIS-Funded Programmes
ORCC: Marine predator and prey response to climate change: Synthesis of Acoustics, Physiology, Prey, and Habitat In a Rapidly changing Environment (SAPPHIRE)
  • Grant number:
    2308300
  • Fiscal year:
    2023
  • Funding amount:
    $149.6K
  • Project category:
    Continuing Grant
University of Salford (The) and KP Acoustics Group Limited KTP 22_23 R1
  • Grant number:
    10033989
  • Fiscal year:
    2023
  • Funding amount:
    $149.6K
  • Project category:
    Knowledge Transfer Partnership
User-controllable and Physics-informed Neural Acoustics Fields for Multichannel Audio Rendering and Analysis in Mixed Reality Application
  • Grant number:
    23K16913
  • Fiscal year:
    2023
  • Funding amount:
    $149.6K
  • Project category:
    Grant-in-Aid for Early-Career Scientists
Combined radiation acoustics and ultrasound imaging for real-time guidance in radiotherapy
  • Grant number:
    10582051
  • Fiscal year:
    2023
  • Funding amount:
    $149.6K
  • Project category:
Comprehensive assessment of speech physiology and acoustics in Parkinson's disease progression
  • Grant number:
    10602958
  • Fiscal year:
    2023
  • Funding amount:
    $149.6K
  • Project category:
The acoustics of climate change - long-term observations in the arctic oceans
  • Grant number:
    2889921
  • Fiscal year:
    2023
  • Funding amount:
    $149.6K
  • Project category:
    Studentship
Collaborative Research: Estimating Articulatory Constriction Place and Timing from Speech Acoustics
  • Grant number:
    2343847
  • Fiscal year:
    2023
  • Funding amount:
    $149.6K
  • Project category:
    Standard Grant
Flow Physics and Vortex-Induced Acoustics in Bio-Inspired Collective Locomotion
  • Grant number:
    DGECR-2022-00019
  • Fiscal year:
    2022
  • Funding amount:
    $149.6K
  • Project category:
    Discovery Launch Supplement
Collaborative Research: Estimating Articulatory Constriction Place and Timing from Speech Acoustics
  • Grant number:
    2141275
  • Fiscal year:
    2022
  • Funding amount:
    $149.6K
  • Project category:
    Standard Grant