Characterizing the recovery of spectral, temporal, and phonemic speech information from visual cues


Basic Information

  • Grant number:
    10563860
  • Principal investigator:
  • Amount:
    $550.4K
  • Host institution:
  • Host institution country:
    United States
  • Project category:
  • Fiscal year:
    2023
  • Funding country:
    United States
  • Project period:
    2023-02-14 to 2028-01-31
  • Project status:
    Ongoing

Project Summary

Auditory speech perception is essential for social, vocational, and emotional health in hearing individuals. However, the reliability of auditory signals varies widely in everyday settings (e.g., at a crowded party), requiring supplemental processes to enable accurate speech perception. The principal mechanisms that support the perception of degraded auditory speech signals are auditory-visual (crossmodal) interactions, which can perceptually restore speech content using visual cues provided by lipreading, rhythmic articulatory movements, and the natural correlations between oral resonance and mouth shape. Moreover, receptive speech processes can be limited by a variety of causes, including intrinsic brain tumor, stroke, cochlear implant usage, and age-related hearing loss, making compensatory crossmodal mechanisms necessary for continuing to work and maintaining healthy social interactions. However, the physiological processes that enable vision to facilitate speech perception remain poorly understood, and no integrative model exists for how these multiple visual dimensions combine to enhance auditory speech perception.

In the auditory domain, distributed populations of neurons encode spectro-temporal information about acoustic cues, which is then transcoded into phonemes. We propose a dual-route perceptual model through which visual signals integrate with phoneme-coded neurons. First, a direct path through which viseme-to-phoneme conversions generate semi-overlapping distributions of activity in the superior temporal gyrus, improving hearing by sharpening auditory phoneme tuning functions. Second, an indirect path through which visual features restore spectral information about speech frequencies and alter phoneme-response timing, resulting in improved auditory spectro-temporal profiles (which are in turn transcoded into phonemes with greater precision). Finally, we will examine the hypothesis that our perceptual system optimizes which of these visual dimensions is prioritized for recovery based on what is missing from the auditory signal. These studies will provide a unified framework for how speech perception benefits from different visual signals. By understanding biological approaches to crossmodally restoring degraded auditory speech information, we can develop better-targeted rehabilitation programs and neural prostheses to maximize speech perception recovery after trauma or during healthy aging.
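The abstract gives no formalism for the dual-route model, so the following is a minimal toy sketch, not the project's actual method. It illustrates in Python (numpy) how the two proposed routes could in principle be combined: a direct route where a viseme constrains a semi-overlapping set of phonemes, an indirect route where visual features re-weight spectrally plausible phonemes, and a mixing parameter standing in for "how much of the auditory signal is missing." Every distribution, gain, and parameter below is invented for illustration.

```python
import numpy as np

# Hypothetical illustration of the proposed dual-route model; no values here
# come from the grant itself.

PHONEMES = ["b", "p", "m", "f", "v"]  # a toy phoneme inventory

# Auditory evidence: degraded listening yields a broad likelihood over phonemes.
auditory_likelihood = np.array([0.30, 0.25, 0.20, 0.15, 0.10])

# Direct route: a viseme maps onto a semi-overlapping set of phonemes
# (e.g., a bilabial viseme is consistent with /b/, /p/, /m/).
viseme_to_phoneme = np.array([0.33, 0.33, 0.33, 0.005, 0.005])

# Fuse the two cues multiplicatively (a simple Bayesian-style combination),
# sharpening the phoneme "tuning function" as the direct route proposes.
direct = auditory_likelihood * viseme_to_phoneme
direct /= direct.sum()

# Indirect route: visual features (mouth shape, articulator timing) restore
# spectro-temporal detail, modeled here as a gain on spectrally plausible phonemes.
spectral_restoration_gain = np.array([1.2, 1.2, 1.1, 0.9, 0.9])
indirect = auditory_likelihood * spectral_restoration_gain
indirect /= indirect.sum()

# The hypothesized optimization: weight each route by how much of the auditory
# signal is missing (a free parameter here, not an estimate from the project).
missing_spectral_info = 0.7
posterior = (1 - missing_spectral_info) * direct + missing_spectral_info * indirect
posterior /= posterior.sum()

print(dict(zip(PHONEMES, posterior.round(3))))
```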

Project Outcomes

Journal articles (0)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)



Other Grants by David Brang

Networks Underlying Visual Modulation of Speech Perception
  • Grant number:
    9337601
  • Fiscal year:
    2016
  • Funding amount:
    $550.4K
  • Project category:
Networks Underlying Visual Modulation of Speech Perception
  • Grant number:
    9353752
  • Fiscal year:
    2016
  • Funding amount:
    $550.4K
  • Project category:
Networks underlying visual modulation of speech perception
  • Grant number:
    8959922
  • Fiscal year:
    2014
  • Funding amount:
    $550.4K
  • Project category:

Similar NSFC Grants

Multimodal ultrasound VisTran-Attention network for assessing the feasibility of fertility-sparing surgery in early-stage cervical cancer
  • Grant number:
  • Year approved:
    2022
  • Funding amount:
    ¥300K
  • Project category:
    Young Scientists Fund
Ultrasomics-Attention Siamese network for early precise assessment of immunotherapy response in intrahepatic cholangiocarcinoma
  • Grant number:
  • Year approved:
    2022
  • Funding amount:
    ¥520K
  • Project category:
    General Program

Similar Overseas Grants

22 UKRI-SBE: Contextually and probabilistically weighted auditory selective attention: from neurons to networks
  • Grant number:
    BB/X013103/1
  • Fiscal year:
    2023
  • Funding amount:
    $550.4K
  • Project category:
    Research Grant
Mechanisms of auditory selective attention for speech and non-speech stimuli
  • Grant number:
    10535232
  • Fiscal year:
    2023
  • Funding amount:
    $550.4K
  • Project category:
SBE-UKRI: Contextually and probabilistically weighted auditory selective attention: from neurons to networks
  • Grant number:
    2414066
  • Fiscal year:
    2023
  • Funding amount:
    $550.4K
  • Project category:
    Standard Grant
Development of test method and hearing aid technology focusing on attention function of patients with auditory processing disorder
  • Grant number:
    23K17600
  • Fiscal year:
    2023
  • Funding amount:
    $550.4K
  • Project category:
    Grant-in-Aid for Challenging Research (Exploratory)
Brain Electrical Dynamics for Top-Down Auditory Attention
  • Grant number:
    RGPIN-2019-05659
  • Fiscal year:
    2022
  • Funding amount:
    $550.4K
  • Project category:
    Discovery Grants Program - Individual
Parametrization and validation of the N-SEEV Attention Model for Visual and Auditory scenes
  • Grant number:
    RGPIN-2022-04852
  • Fiscal year:
    2022
  • Funding amount:
    $550.4K
  • Project category:
    Discovery Grants Program - Individual
Nanomaterials Based Dry Electroencephalography Electrodes for Auditory Attention Decoding in Hearing Assistance Devices
  • Grant number:
    570743-2021
  • Fiscal year:
    2022
  • Funding amount:
    $550.4K
  • Project category:
    Alliance Grants
Attention and Auditory Scene Analysis
  • Grant number:
    RGPIN-2021-02721
  • Fiscal year:
    2022
  • Funding amount:
    $550.4K
  • Project category:
    Discovery Grants Program - Individual
SBE-UKRI: Contextually and probabilistically weighted auditory selective attention: from neurons to networks
  • Grant number:
    2219521
  • Fiscal year:
    2022
  • Funding amount:
    $550.4K
  • Project category:
    Standard Grant
Excellence in Research: Incorporating Attention into Computational Auditory Scene Analysis Using Spectral Clustering with Focal Templates
  • Grant number:
    2100874
  • Fiscal year:
    2021
  • Funding amount:
    $550.4K
  • Project category:
    Standard Grant