Clarity in Motion: A Motion-Tolerant Aid for Selectively Hearing Acoustic Sources
Basic Information
- Award Number: 10603657
- Principal Investigator:
- Amount: $275,400
- Host Institution:
- Host Institution Country: United States
- Project Category:
- Fiscal Year: 2022
- Funding Country: United States
- Project Period: 2022-12-15 to 2024-11-30
- Project Status: Completed
- Source:
- Keywords: Acoustics, Algorithms, Architecture, Auditory, Awareness, Back, Binaural, Cellular Phone, Cochlear Implants, Complex, Data, Devices, Ear, Environment, Evaluation, Frequencies, Government, Head, Head Movements, Healthcare, Hearing, Hearing Aids, Hearing problem, Heart, Human, Image, Individual, Measures, Methods, Motion, Movement, Noise, Output, Performance, Persons, Phase, Positioning Attribute, Privatization, Process, Reaction Time, Reporting, Response Latencies, Signal Transduction, Source, Specific qualifier value, Speech, Speech Intelligibility, Speed, Stream, System, Techniques, Time, United States National Institutes of Health, Validation, Variant, Work, acoustic imaging, binaural hearing, blind, design, handheld mobile device, hearing impairment, improved, innovation, interest, microphone, motion sensor, novel, sensor, signal processing, sound, success, transmission process, wireless
Project Summary
Noisy rooms with multiple moving sound sources create problems for hearing-impaired listeners.
Unwanted masking sounds reduce the intelligibility of speech and other sounds listeners want to hear. Known
“source separation” signal processing methods can extract important sources and “scrub” unwanted noise,
but these methods typically require the acoustic sensors (microphones) and sources they process to be fixed
in space—the optimal separation solutions computed by such methods are position dependent. Movement
degrades the quality of separation (QoS) of these solutions, and reconvergence following a
change of position takes time—often tens of seconds. This constraint limits the practical utility of traditional
separation methods. We propose a novel assistive listening system called CIM (“Clarity in Motion”) which is
capable of maintaining an optimal separation of acoustic sources in real-world environments changing at
“human” speeds. CIM dramatically shortens the time required to reconverge separation solutions. CIM is
designed for integration into NIH’s Open Speech Platform (OSP) initiative for hearing aids and personal audio
devices. CIM leverages STAR’s Multiple Algorithm Source Separation (MASS) application framework of
“pluggable” acoustic separation modules. MASS is compatible with OSP and is publicly available on GitHub.
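The position dependence described above can be illustrated with a toy two-source, two-microphone instantaneous-mixing model. This is a deliberate simplification (real rooms are convolutive) and is not CIM's method; the mixing matrices below are invented for the example. The optimal unmixing matrix is tied to the source-to-mic geometry, so a solution computed before a source moves no longer separates afterward:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent unit-variance sources, instantaneous (gain-only) mixing.
sources = rng.standard_normal((2, 1000))

A_before = np.array([[1.0, 0.3],
                     [0.4, 1.0]])  # hypothetical geometry before movement
A_after = np.array([[1.0, 0.7],    # source 2 has moved closer to mic 1
                    [0.1, 1.0]])

# The optimal separation solution for the old geometry is inv(A_before).
W = np.linalg.inv(A_before)

recovered_before = W @ (A_before @ sources)  # matched geometry
recovered_after = W @ (A_after @ sources)    # stale solution after movement

def residual_db(est, ref):
    """Residual error power relative to the wanted source, in dB."""
    err = est - ref
    return 10 * np.log10((np.mean(err**2) + 1e-300) / np.mean(ref**2))

print(f"matched geometry: {residual_db(recovered_before[0], sources[0]):.1f} dB")
print(f"stale solution:   {residual_db(recovered_after[0], sources[0]):.1f} dB")
```

With the matched geometry the residual is at numerical noise level; after the (hypothetical) movement the stale solution leaks substantial interference back into the output, which is the deconvergence the abstract describes.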
CIM is room-centric, sensor image-based, and listener-specific. Important system components are
embedded in the room itself, rather than in the user’s ear (e.g. hearing aid). CIM delivers listener-specific audio
to one or more users through their smartphones. CIM employs multiple microphones distributed around a room
and connected to a CIM Room Server (a signal processing device) supporting all listeners. This Server
processes the audio signals from these shared Room Mics to scrub unwanted sounds from private Listener
Mics, which are typically hearing aid, cochlear implant, or other head-mounted mics specific to each listener.
Each listener uses a CIM mobile device app to register their Listener Mic and specify which acoustic sources to
scrub. The Room Server computes an individualized scrubbed audio stream for each listener and transmits it
wirelessly to their Listener App. The Listener App outputs this stream to the listener’s hearing aid, cochlear
implant, or earbuds as a standard line level or current loop audio signal.
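CIM's separation techniques are proprietary and not described here; as a generic stand-in, the "scrubbing" of a private Listener Mic using a shared Room Mic reference can be sketched with normalized-LMS adaptive noise cancellation. All signals and the `room_path` filter below are invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
speech = 0.5 * rng.standard_normal(n)   # wanted source at the Listener Mic
noise_ref = rng.standard_normal(n)      # unwanted source as seen by a Room Mic

# The unwanted source reaches the Listener Mic through a short (hypothetical)
# room response; the Listener Mic hears speech plus this filtered noise.
room_path = np.array([0.6, 0.3, 0.1])
noise_at_listener = np.convolve(noise_ref, room_path)[:n]
listener_mic = speech + noise_at_listener

# Normalized LMS: adapt w so that w * noise_ref tracks noise_at_listener;
# the error signal e is the scrubbed output delivered to the listener.
taps, mu, eps = 8, 0.1, 1e-6
w = np.zeros(taps)
scrubbed = np.zeros(n)
for i in range(taps - 1, n):
    x = noise_ref[i - taps + 1:i + 1][::-1]  # most recent samples first
    e = listener_mic[i] - w @ x
    scrubbed[i] = e
    w += mu * e * x / (x @ x + eps)          # NLMS update

def snr_db(sig, noise):
    return 10 * np.log10(np.mean(sig**2) / np.mean(noise**2))

tail = slice(n // 2, n)  # evaluate after the filter has converged
print(f"input SNR:    {snr_db(speech[tail], noise_at_listener[tail]):.1f} dB")
print(f"scrubbed SNR: {snr_db(speech[tail], (scrubbed - speech)[tail]):.1f} dB")
```

After convergence the filter approximates the room path and the scrubbed stream carries substantially less of the unwanted source; a real deployment would operate per listener, with one such computation per registered Listener Mic on the Room Server.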
The heart of CIM’s innovation resides in two separate proprietary techniques, described herein, for
reducing the separation solution deconvergence (ΔQ) associated with source or sensor movements.
In Phase I, we will characterize the relationship between ΔQ and relevant objective parameters of
acoustic scenes; implement and quantitatively evaluate the contribution of our novel methods for reducing
motion-induced deconvergence; and carry out a perceptual study of the relationship between
movement-induced solution deconvergence and both listening effort and intelligibility judgements.
The CIM system will help hearing-impaired listeners hear clearly in noisy rooms with moving sources.
Project Outcomes
- Journal articles (0)
- Monographs (0)
- Research awards (0)
- Conference papers (0)
- Patents (0)
Other Publications by RICHARD S GOLDHOR
Other Grants by RICHARD S GOLDHOR
Hear What I Want: an Acoustically Smart Personalized Common Room
- Award Number: 10484661
- Fiscal Year: 2022
- Funding Amount: $275,400
- Project Category:
SIRCE: A Sensor Image Based Room-Centered Equalization System for Hearing Aids
- Award Number: 9255961
- Fiscal Year: 2016
- Funding Amount: $275,400
- Project Category:
ACES: A Product to Suppress or Enhance Critical Components in Acoustic Signals
- Award Number: 8200823
- Fiscal Year: 2011
- Funding Amount: $275,400
- Project Category:
DMX: Enabling Blind Source Separation for Hearing Health Care
- Award Number: 8648615
- Fiscal Year: 2010
- Funding Amount: $275,400
- Project Category:
DMX: Enabling Blind Source Separation for Hearing Health Care
- Award Number: 9061938
- Fiscal Year: 2010
- Funding Amount: $275,400
- Project Category:
SYSTEM FOR CONVERTING SPEECH INTO SYNTHESIS PARAMETERS
- Award Number: 3494747
- Fiscal Year: 1991
- Funding Amount: $275,400
- Project Category:
Similar Overseas Grants
CAREER: Efficient Algorithms for Modern Computer Architecture
- Award Number: 2339310
- Fiscal Year: 2024
- Funding Amount: $275,400
- Project Category: Continuing Grant
Collaborative Research: SHF: Small: Artificial Intelligence of Things (AIoT): Theory, Architecture, and Algorithms
- Award Number: 2221742
- Fiscal Year: 2022
- Funding Amount: $275,400
- Project Category: Standard Grant
Collaborative Research: SHF: Small: Artificial Intelligence of Things (AIoT): Theory, Architecture, and Algorithms
- Award Number: 2221741
- Fiscal Year: 2022
- Funding Amount: $275,400
- Project Category: Standard Grant
Algorithms and Architecture for Super Terabit Flexible Multicarrier Coherent Optical Transmission
- Award Number: 533529-2018
- Fiscal Year: 2020
- Funding Amount: $275,400
- Project Category: Collaborative Research and Development Grants
OAC Core: Small: Architecture and Network-aware Partitioning Algorithms for Scalable PDE Solvers
- Award Number: 2008772
- Fiscal Year: 2020
- Funding Amount: $275,400
- Project Category: Standard Grant
Algorithms and Architecture for Super Terabit Flexible Multicarrier Coherent Optical Transmission
- Award Number: 533529-2018
- Fiscal Year: 2019
- Funding Amount: $275,400
- Project Category: Collaborative Research and Development Grants
Visualization of FPGA CAD Algorithms and Target Architecture
- Award Number: 541812-2019
- Fiscal Year: 2019
- Funding Amount: $275,400
- Project Category: University Undergraduate Student Research Awards
Collaborative Research: ABI Innovation: Algorithms for recovering root architecture from 3D imaging
- Award Number: 1759836
- Fiscal Year: 2018
- Funding Amount: $275,400
- Project Category: Standard Grant
Collaborative Research: ABI Innovation: Algorithms for recovering root architecture from 3D imaging
- Award Number: 1759796
- Fiscal Year: 2018
- Funding Amount: $275,400
- Project Category: Standard Grant
Collaborative Research: ABI Innovation: Algorithms for recovering root architecture from 3D imaging
- Award Number: 1759807
- Fiscal Year: 2018
- Funding Amount: $275,400
- Project Category: Standard Grant