Multisensory foundations of speech perception in infancy
Basic Information
- Approval number: 8720041
- Principal investigator: JANET F. WERKER
- Amount: $141.7K
- Host institution:
- Host institution country: United States
- Project category:
- Fiscal year: 2013
- Funding country: United States
- Project period: 2013-08-12 to 2017-05-31
- Project status: Completed
- Source:
- Keywords: Accounting, Acoustics, Adult, Age-Months, Air, Association Learning, Auditory, Auditory system, Birth, Bone Conduction, Development, Discrimination, Environment, Esthesia, Face, Feedback, Fetus, Financial compensation, Foundations, Gestures, Grant, Growth, Hearing, Human, Individual, Infant, Intervention, Joints, Knowledge, Language, Language Development, Language Disorders, Learning, Life, Literature, Maps, Mediating, Modality, Motor, Movement, Nature, Outcome, Pattern, Perception, Peripheral, Plant Roots, Pregnancy, Process, Production, Recording of previous events, Rest, Sensory, Signal Transduction, Speech, Speech Development, Speech Perception, Speech Sound, System, Tactile, Testing, Uterus, Vision, Visual, Voice, Work, auditory discrimination, base, classical conditioning, experience, infancy, language perception, motor impairment, multisensory, oral motor, preference, public health relevance, research study, sensory system, somatosensory, sound, speech processing, trait, visual information, visual motor
Project Summary
DESCRIPTION (provided by applicant): Infants are born with a preference for listening to speech over non-speech, and with a set of perceptual sensitivities that enable them to discriminate most of the speech-sound differences used in the world's languages, thus preparing them to acquire any language. By 10 months of age, infants become experts at perceiving their native language. This involves improvements in the discrimination of native consonant contrasts but, more importantly for this grant, a decline in the discrimination of non-native consonant distinctions.

In the adult, speech perception is richly multimodal. What we hear is influenced by visual information in talking faces, by self-produced articulations, and even by external tactile stimulation. While speech perception is also multisensory in young infants, the genesis of this multisensory capacity is debated. According to one view, multisensory perception is established through learned integration: seeing and hearing a particular speech sound allows learning of the commonalities between the two. This grant proposes and tests the hypothesis that infant speech perception is multisensory without specific prior learning experience.

Debates regarding the ontogeny of human language have centered on whether the perceptual building blocks of language are acquired through experience or are innate. Yet this nature-vs.-nurture controversy is rapidly being replaced by a much more nuanced framework. Here, it is proposed that the earliest-developing sensory system (for speech, likely the somatosensory system, including somatosensory feedback from the oral-motor movements first manifest in the fetus) provides an organization on which auditory speech can build once the peripheral auditory system comes online by 22 weeks' gestation. Heard speech, both the maternal voice via bone conduction and external (filtered) speech through the uterus, is organized in part by this somatosensory/motor foundation. At birth, when vision becomes available, seen speech maps onto this already established foundation. These interconnected perceptual systems thus provide a set of parameters for matching heard, seen, and felt speech at birth. Importantly, it is argued that these multisensory perceptual foundations are established for language-general perception: they set in place an organization that provides redundancy among the oral-motor gesture, the visible oral-motor movements, and the auditory percept of any speech sound. Hence, specific learning of individual cross-modal matches is not required. Our thesis, then, is that while multisensory speech perception has a developmental history (and hence is not akin to an 'innate' starting point), multisensory sensitivities should be in place without experience of specific speech sounds. Multisensory processing should therefore be as evident for non-native, never-before-experienced speech sounds as it is for native, and hence familiar, ones.

To test this hypothesis against the alternative hypothesis of learned integration, English-learning infants will be tested on non-native (unfamiliar) speech-sound contrasts and compared to Hindi-learning infants, for whom these contrasts are native. Four sets of experiments, each using a multimodal Distributional Learning paradigm, are proposed. Infants will be tested at 6 months, an age at which they can still discriminate non-native speech sounds, and at 10 months, an age after which they begin to fail. It is proposed that if speech perception is multisensory without specific experience, the addition of matching visual, tactile, or motor information should facilitate discrimination of a non-native speech-sound contrast at 10 months, while the addition of mismatching information should disrupt discrimination at 6 months. If multisensory speech perception is instead learned, this pattern should be seen only for the Hindi-learning infants, for whom the contrasts are familiar and hence already intersensory. The Specific Aims are to test the influence of: 1) visual information on auditory speech perception (Experimental Set 1); 2) oral-motor gestures on auditory speech perception (Experimental Set 2); 3) oral-motor gestures on auditory-visual speech perception (Experimental Set 3); and 4) tactile information on auditory speech perception (Experimental Set 4). This work is of theoretical import for characterizing the development of speech perception in typically developing infants, and it provides a framework for understanding the roots of possible delay in infants born with a sensory or oral-motor impairment. The opportunities provided by, and constraints imposed by, an initial multisensory speech percept allow infants to rapidly acquire knowledge from their language-learning environment, while a deficit in one of the contributing modalities could compromise optimal speech and language development.
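For readers unfamiliar with the Distributional Learning paradigm named above, the core statistical idea is that the frequency distribution of tokens along an acoustic continuum signals how many categories to form: a bimodal exposure distribution supports two categories (and hence continued discrimination), while a unimodal one supports a single category (and hence a decline). The sketch below is not part of the grant; the 8-step continuum, exposure frequencies, and the Gaussian-mixture learner are illustrative assumptions chosen only to make the logic concrete.

```python
# Minimal sketch of the statistical logic behind a distributional-learning
# manipulation, assuming a Gaussian-mixture learner over a 1-D acoustic
# continuum. All numbers here are illustrative, not the grant's stimuli.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
continuum = np.arange(1, 9)  # 8-step continuum (e.g., VOT-like values)

# Exposure frequencies per step: two peaks (steps 2 and 7) vs. one central peak.
bimodal_freq = np.array([5, 30, 10, 5, 5, 10, 30, 5])
unimodal_freq = np.array([2, 5, 10, 33, 33, 10, 5, 2])

def sample_tokens(freqs, n=400, noise=0.75):
    """Draw n noisy tokens from the continuum with the given step frequencies."""
    steps = rng.choice(continuum, size=n, p=freqs / freqs.sum())
    return (steps + rng.normal(0.0, noise, size=n)).reshape(-1, 1)

def preferred_categories(tokens):
    """Compare 1- vs. 2-category Gaussian mixtures by BIC (lower is better)."""
    bic = [GaussianMixture(n_components=k, random_state=0).fit(tokens).bic(tokens)
           for k in (1, 2)]
    return 1 + int(np.argmin(bic))

for name, freqs in [("bimodal", bimodal_freq), ("unimodal", unimodal_freq)]:
    k = preferred_categories(sample_tokens(freqs))
    print(f"{name} exposure -> learner prefers {k} category(ies)")
```

On this toy model, bimodal exposure should typically yield a two-category solution and unimodal exposure a one-category solution; the grant's multimodal version of the paradigm adds matching or mismatching visual, oral-motor, and tactile information to such exposure phases.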
Project Outcomes
- Journal articles: 6
- Monographs: 0
- Research awards: 0
- Conference papers: 0
- Patents: 0
Infants' use of temporal and phonetic information in the encoding of audiovisual speech.
- DOI:
- Publication year: 2016
- Journal:
- Impact factor: 0
- Authors: Danielson, D. Kyle; Tam, Cassie; Kandhadai, Padmapriya; Werker, Janet F.
- Corresponding author: Werker, Janet F.
Other grants by JANET F. WERKER
Multisensory foundations of speech perception in infancy
- Approval number: 8575658
- Fiscal year: 2013
- Funding amount: $141.7K
- Project category:
Similar Overseas Grants
Nonlinear Acoustics for the conditioning monitoring of Aerospace structures (NACMAS)
- Approval number: 10078324
- Fiscal year: 2023
- Funding amount: $141.7K
- Project category: BEIS-Funded Programmes

ORCC: Marine predator and prey response to climate change: Synthesis of Acoustics, Physiology, Prey, and Habitat In a Rapidly changing Environment (SAPPHIRE)
- Approval number: 2308300
- Fiscal year: 2023
- Funding amount: $141.7K
- Project category: Continuing Grant

University of Salford (The) and KP Acoustics Group Limited KTP 22_23 R1
- Approval number: 10033989
- Fiscal year: 2023
- Funding amount: $141.7K
- Project category: Knowledge Transfer Partnership

User-controllable and Physics-informed Neural Acoustics Fields for Multichannel Audio Rendering and Analysis in Mixed Reality Application
- Approval number: 23K16913
- Fiscal year: 2023
- Funding amount: $141.7K
- Project category: Grant-in-Aid for Early-Career Scientists

Combined radiation acoustics and ultrasound imaging for real-time guidance in radiotherapy
- Approval number: 10582051
- Fiscal year: 2023
- Funding amount: $141.7K
- Project category:

Comprehensive assessment of speech physiology and acoustics in Parkinson's disease progression
- Approval number: 10602958
- Fiscal year: 2023
- Funding amount: $141.7K
- Project category:

The acoustics of climate change - long-term observations in the arctic oceans
- Approval number: 2889921
- Fiscal year: 2023
- Funding amount: $141.7K
- Project category: Studentship

Collaborative Research: Estimating Articulatory Constriction Place and Timing from Speech Acoustics
- Approval number: 2343847
- Fiscal year: 2023
- Funding amount: $141.7K
- Project category: Standard Grant

Collaborative Research: Estimating Articulatory Constriction Place and Timing from Speech Acoustics
- Approval number: 2141275
- Fiscal year: 2022
- Funding amount: $141.7K
- Project category: Standard Grant

Flow Physics and Vortex-Induced Acoustics in Bio-Inspired Collective Locomotion
- Approval number: DGECR-2022-00019
- Fiscal year: 2022
- Funding amount: $141.7K
- Project category: Discovery Launch Supplement