Links Between Production and Perception in Speech
Basic Information
- Grant Number: 7844217
- Principal Investigator:
- Amount: $11,900
- Host Institution:
- Host Institution Country: United States
- Project Category:
- Fiscal Year: 2009
- Funding Country: United States
- Project Period: 2009-06-01 to 2010-09-30
- Project Status: Completed
- Source:
- Keywords: Accounting; Acoustics; American; Articulators; Autistic Disorder; Behavioral; Brain; Code; Communication; Computer Simulation; Development; Disease; Face; Feeling; Female; Fire - disasters; Gestures; Goals; Grant; Hearing; Human; Individual; Jaw; Joints; Language; Language Development; Language Disorders; Linguistics; Link; Lip structure; Maps; Measurement; Modeling; Modification; Monkeys; Noise; Optics; Output; Parkinson Disease; Pattern; Perception; Phase; Phonetics; Process; Production; Research; Research Personnel; Role; Scheme; Shapes; Signal Transduction; Source; Specific qualifier value; Speech; Speech Perception; Stimulus; System; Testing; Time; To specify; Ultrasonography; Variant; Visual; Voice; Work; base; male; mirror neuron; research study; social; speech recognition; theories; tool
Project Summary
Project Summary: Our long-term goal is to understand how humans organize their brains and vocal
tracts so that they can speak; only through understanding normal function can we see what happens with
disorders. Although it is uncontroversial that most of the speech we hear is produced by a human vocal
tract, it is less accepted that speech production and speech perception are intricately linked. Many theorists
hold that the vocal tract's acoustic output is dealt with in a purely acoustic manner and that the link would be
seen in modifications of the vocal tract shape to achieve particular acoustics. An alternative approach holds
that speech consists of gestures (the coordinated activity of articulators), such as the jaw and the lips,
achieving a phonetic goal, such as lip closure. The gestural model has allowed an insightful interpretation of
many speech production phenomena, and these models have begun to yield testable predictions for
perceptual theories as well. The proposed experiments expand on this research, showing how perception of
gestures is possible in automatic speech recognition, how the consequences of articulation (acoustic, visual,
and even haptic) are used by perceivers, and how accommodations are made for differences between
speakers. This theoretical outlook has been fruitfully applied to problems in language acquisition, language
change, and certain language disabilities. The advances from the proposed research should allow even
broader applications. The goal is to show how acoustic parameters that cohere because of their origin in
articulation are used by listeners. This will be accomplished by acoustical modeling of natural productions,
perception of natural speech under modified circumstances (e.g., impaired by noise or enhanced by feeling
the articulators saying what is being heard), and measurement of speech with ultrasound and optical
markers. These measurements provide a basis for input to our configurable articulatory synthesizer, which
can match the size and acoustic output of individual speakers. Stimuli generated from this synthesizer can
test hypotheses about what is important in the production patterns we see. The results of these experiments
will show more clearly than ever the tight link between production and perception of speech.
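As a minimal illustrative sketch (not part of the proposal), one way to see why acoustic parameters "cohere because of their origin in articulation" is the textbook uniform-tube approximation of the vocal tract: changing a single articulatory fact, the speaker's tract length, shifts every formant by the same factor. The Python snippet below assumes that approximation and hypothetical tract lengths; it is not the project's configurable articulatory synthesizer, only an illustration of coherent, articulation-driven acoustic patterns.

```python
# Illustrative sketch only: formants of a uniform tube closed at the glottis
# and open at the lips, F_n = (2n - 1) * c / (4 * L). Scaling tract length L
# (a speaker-size difference) moves all formants by the same factor, so the
# acoustic parameters shift together because they share an articulatory origin.

def uniform_tube_formants(tract_length_cm, n_formants=4, speed_of_sound_cm_s=35000.0):
    """Return the first n_formants resonance frequencies (Hz) of a uniform tube."""
    return [(2 * n - 1) * speed_of_sound_cm_s / (4.0 * tract_length_cm)
            for n in range(1, n_formants + 1)]

if __name__ == "__main__":
    # Hypothetical tract lengths chosen for illustration only.
    for label, length_cm in [("17.5 cm tract", 17.5), ("14.5 cm tract", 14.5)]:
        print(label, [round(f) for f in uniform_tube_formants(length_cm)])
    # 17.5 cm -> [500, 1500, 2500, 3500]; 14.5 cm -> [603, 1810, 3017, 4224]:
    # every formant scales by the same ratio (17.5 / 14.5), the kind of coherent
    # pattern a size-configurable articulatory synthesizer can reproduce.
```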
Relevance: Speech is the primary means most humans use to communicate and maintain social
relationships, but it is vulnerable to a range of disorders. We have to understand how it is that speech works
normally so we can know what to do when things go wrong. Research along the lines of the present project
has already contributed to other grants dealing with disorders such as Parkinson's disease and autism.
Project Outcomes
Journal articles: 0
Monographs: 0
Research awards: 0
Conference papers: 0
Patents: 0
Other Grants by Douglas H Whalen
Making DeepEdge, a tool for ultrasound analysis, cloud-accessible.
- Grant Number: 10406387
- Fiscal Year: 1996
- Funding Amount: $11,900
- Project Category:
LINKS BETWEEN PRODUCTION AND PERCEPTION IN SPEECH
- Grant Number: 6198425
- Fiscal Year: 1996
- Funding Amount: $11,900
- Project Category:
Links Between Production and Perception in Speech
- Grant Number: 7387332
- Fiscal Year: 1996
- Funding Amount: $11,900
- Project Category:
LINKS BETWEEN PRODUCTION AND PERCEPTION IN SPEECH
- Grant Number: 6634466
- Fiscal Year: 1996
- Funding Amount: $11,900
- Project Category:
Links Between Production and Perception in Speech
- Grant Number: 8523824
- Fiscal Year: 1996
- Funding Amount: $11,900
- Project Category:
Similar Overseas Grants
Nonlinear Acoustics for the conditioning monitoring of Aerospace structures (NACMAS)
- Grant Number: 10078324
- Fiscal Year: 2023
- Funding Amount: $11,900
- Project Category: BEIS-Funded Programmes
ORCC: Marine predator and prey response to climate change: Synthesis of Acoustics, Physiology, Prey, and Habitat In a Rapidly changing Environment (SAPPHIRE)
- Grant Number: 2308300
- Fiscal Year: 2023
- Funding Amount: $11,900
- Project Category: Continuing Grant
University of Salford (The) and KP Acoustics Group Limited KTP 22_23 R1
- Grant Number: 10033989
- Fiscal Year: 2023
- Funding Amount: $11,900
- Project Category: Knowledge Transfer Partnership
User-controllable and Physics-informed Neural Acoustics Fields for Multichannel Audio Rendering and Analysis in Mixed Reality Application
- Grant Number: 23K16913
- Fiscal Year: 2023
- Funding Amount: $11,900
- Project Category: Grant-in-Aid for Early-Career Scientists
Combined radiation acoustics and ultrasound imaging for real-time guidance in radiotherapy
- Grant Number: 10582051
- Fiscal Year: 2023
- Funding Amount: $11,900
- Project Category:
Comprehensive assessment of speech physiology and acoustics in Parkinson's disease progression
- Grant Number: 10602958
- Fiscal Year: 2023
- Funding Amount: $11,900
- Project Category:
The acoustics of climate change - long-term observations in the arctic oceans
- Grant Number: 2889921
- Fiscal Year: 2023
- Funding Amount: $11,900
- Project Category: Studentship
Collaborative Research: Estimating Articulatory Constriction Place and Timing from Speech Acoustics
- Grant Number: 2343847
- Fiscal Year: 2023
- Funding Amount: $11,900
- Project Category: Standard Grant
Collaborative Research: Estimating Articulatory Constriction Place and Timing from Speech Acoustics
- Grant Number: 2141275
- Fiscal Year: 2022
- Funding Amount: $11,900
- Project Category: Standard Grant
Flow Physics and Vortex-Induced Acoustics in Bio-Inspired Collective Locomotion
- Grant Number: DGECR-2022-00019
- Fiscal Year: 2022
- Funding Amount: $11,900
- Project Category: Discovery Launch Supplement