Context:sensitivity/bias/parsing phonetic information


Basic Information

  • Grant Number: 6821834
  • Principal Investigator:
  • Amount: $275,100
  • Host Institution:
  • Host Institution Country: United States
  • Project Type:
  • Fiscal Year: 2004
  • Funding Country: United States
  • Project Period: 2004-07-01 to 2007-06-30
  • Project Status: Completed

Project Abstract

DESCRIPTION (provided by applicant): In fluent speech, speakers begin to pronounce the next sound before they have finished pronouncing the last. As a result, speech sounds not only occur right next to one another but actually overlap, and pauses occur only between whole phrases, not between individual sounds. This characteristic of fluent speech presents the listener with two formidable problems: separating overlapping sounds, and then recognizing sounds whose acoustics have been distorted by overlap with their neighbors' pronunciations. This proposal pursues the hypothesis that both separation and recognition are possible because successive intervals in the signal contrast with one another perceptually. For example, after an interval in which most of the sound energy is at high frequencies, a sound whose energy is at mid frequencies will sound relatively low; after a relatively long interval, an interval of intermediate duration will sound relatively short. The experiments test a version of this hypothesis in which sequential contrast of this kind is exaggerated during the initial auditory evaluation of the sounds, before the listener has assigned any linguistic value to them, i.e. before the sounds are recognized as instances of particular categories. If sequential contrast arises before the sounds are recognized, then it will be impervious to any linguistic knowledge the listener may have, e.g. whether the current sound makes a word with its context, occurs frequently in that context, is phonotactically legal in that context, etc. A separate, prelinguistic, auditory stage of phonetic processing is diagnosed by better discrimination of sound sequences that differ in the direction of their sequential contrast, e.g. high-low vs. low-high, than of sequences that do not, i.e. high-high vs. low-low. If linguistic knowledge is used at all stages of processing, these two pairs of sequences should instead be equally easy to distinguish, because all the intervals will have been assigned to categories and will therefore be equally different. The results of these experiments therefore permit a choice between interactive models of speech sound recognition, in which listeners use their linguistic knowledge at all stages of processing the speech sounds they hear, and autonomous models, in which they use only the psychoacoustic properties of the signal during the first stage and only later apply what they know linguistically to the output of that stage. If the autonomous model is supported, then the robustness of speech perception under adverse conditions, or by impaired listeners, can be improved more by enhancing signal quality than by adding redundant linguistic information.
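The discrimination logic in the abstract can be made concrete with a small numerical sketch. This is not the applicants' model or stimuli; it simply assumes, for illustration, that a prelinguistic auditory stage shifts each interval's perceived value away from the value of the interval that precedes it (sequential contrast), and that discriminability grows with the resulting perceptual distance. Under those assumptions, sequence pairs that differ in contrast direction (high-low vs. low-high) come out more distinct than pairs that do not (high-high vs. low-low), which is the pattern the autonomous model predicts. All values and the contrast factor below are illustrative placeholders.

```python
# Illustrative sketch only: a toy sequential-contrast model, not the
# applicants' experimental design or analysis. Values and the contrast
# factor are arbitrary placeholders.

def perceived(sequence, contrast=0.5):
    """Shift each interval's value away from the preceding interval's value."""
    shifted = []
    prev = None
    for value in sequence:
        if prev is None:
            shifted.append(value)  # first interval: no preceding context to contrast with
        else:
            shifted.append(value + contrast * (value - prev))  # exaggerate the change
        prev = value
    return shifted

def perceptual_distance(seq_a, seq_b):
    """Sum of interval-by-interval differences after the contrast stage."""
    return sum(abs(a - b) for a, b in zip(perceived(seq_a), perceived(seq_b)))

HI, LO = 3000.0, 1000.0  # placeholder spectral values (e.g., Hz of an energy peak)

# Pair that differs in contrast direction vs. pair that does not.
direction_pair = perceptual_distance([HI, LO], [LO, HI])  # high-low vs. low-high
same_pair      = perceptual_distance([HI, HI], [LO, LO])  # high-high vs. low-low

print(direction_pair, same_pair)  # 6000.0 4000.0 with these placeholder values
```

With the contrast factor set to 0 the two distances are equal, mirroring the interactive prediction that both pairs should be equally easy to distinguish; the proposed experiments decide between the two outcomes with human listeners rather than with a model of this kind.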

Project Outcomes

Journal Articles (0)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)

Other Grants by JOHN C KINGSTON

Context:sensitivity/bias/parsing phonetic information
  • Grant Number: 6912759
  • Fiscal Year: 2004
  • Funding Amount: $275,100
  • Project Type:
Context:sensitivity/bias/parsing phonetic information
  • Grant Number: 7086399
  • Fiscal Year: 2004
  • Funding Amount: $275,100
  • Project Type:
INTEGRATION OF ARTICULATIONS IN SPEECH
  • Grant Number: 3461900
  • Fiscal Year: 1993
  • Funding Amount: $275,100
  • Project Type:
INTEGRATION OF ARTICULATIONS IN SPEECH
  • Grant Number: 2443607
  • Fiscal Year: 1993
  • Funding Amount: $275,100
  • Project Type:
INTEGRATION OF ARTICULATIONS IN SPEECH
  • Grant Number: 2126738
  • Fiscal Year: 1993
  • Funding Amount: $275,100
  • Project Type:
INTEGRATION OF ARTICULATIONS IN SPEECH
  • Grant Number: 2126739
  • Fiscal Year: 1993
  • Funding Amount: $275,100
  • Project Type:
INTEGRATION OF ARTICULATIONS IN SPEECH
  • Grant Number: 2126740
  • Fiscal Year: 1993
  • Funding Amount: $275,100
  • Project Type:
TRAINING IN PSYCHOLINGUISTICS
  • Grant Number: 2195417
  • Fiscal Year: 1987
  • Funding Amount: $275,100
  • Project Type:
TRAINING IN PSYCHOLINGUISTICS
  • Grant Number: 6636760
  • Fiscal Year: 1987
  • Funding Amount: $275,100
  • Project Type:
TRAINING IN PSYCHOLINGUISTICS
  • Grant Number: 2195416
  • Fiscal Year: 1987
  • Funding Amount: $275,100
  • Project Type:

Similar Overseas Grants

Study on construction of P300 based brain-computer interface (BCI) by selective attention of auditory stimulus sound
  • Grant Number: 20H04563
  • Fiscal Year: 2020
  • Funding Amount: $275,100
  • Project Type: Grant-in-Aid for Scientific Research (B)
Effects of attentional modification on prepulse inhibition of the auditory stimulus.
  • Grant Number: 17K04508
  • Fiscal Year: 2017
  • Funding Amount: $275,100
  • Project Type: Grant-in-Aid for Scientific Research (C)
Detection of deception with event-related potentials using simultaneous visual and auditory stimulus presentation method
  • Grant Number: 26380973
  • Fiscal Year: 2014
  • Funding Amount: $275,100
  • Project Type: Grant-in-Aid for Scientific Research (C)
Neural Mechanisms of Visual and Auditory Stimulus Selection
  • Grant Number: 8704650
  • Fiscal Year: 2013
  • Funding Amount: $275,100
  • Project Type:
AUDITORY STIMULUS FREQUENCY EFFECT ON HUMAN BRAINSTEM AUDITORY RESPONSE
  • Grant Number: 7011638
  • Fiscal Year: 2004
  • Funding Amount: $275,100
  • Project Type:
Communication Aid Based on Event Related Brain Potentials for Auditory Stimulus
  • Grant Number: 12832027
  • Fiscal Year: 2000
  • Funding Amount: $275,100
  • Project Type: Grant-in-Aid for Scientific Research (C)
Auditory stimulus generation, delivery, and measurement system
  • Grant Number: 121437-1992
  • Fiscal Year: 1992
  • Funding Amount: $275,100
  • Project Type: Research Tools and Instruments - Category 1 (<$150,000)
AUDITORY STIMULUS CODING IN NOISE-DAMAGED EARS
  • Grant Number: 3035455
  • Fiscal Year: 1991
  • Funding Amount: $275,100
  • Project Type:
Neural Mechanisms of Visual and Auditory Stimulus Selection
  • Grant Number: 7685418
  • Fiscal Year: 1980
  • Funding Amount: $275,100
  • Project Type:
Neural Mechanisms of Visual and Auditory Stimulus Selection
  • Grant Number: 7917313
  • Fiscal Year: 1980
  • Funding Amount: $275,100
  • Project Type: