Expanding articulatory information from ultrasound imaging of speech using MRI-based image simulations and audio measurements
Basic Information
- Award number: 10537976
- Principal investigator: Sarah Rotong Li
- Amount: $39.7K
- Host institution:
- Host institution country: United States
- Project category:
- Fiscal year: 2022
- Funding country: United States
- Project period: 2022-08-01 to 2024-07-31
- Project status: Completed
- Source:
- Keywords: 3-Dimensional; Acoustics; Address; Adult; Affect; Age; Air; Area; Articulation; Biofeedback Training; Black race; Characteristics; Child; Client; Clinical; Computer software; Data; Data Collection; Data Set; Disease; Education; Employment; Feedback; Fellowship; Future; Geometry; Goals; Guidelines; Hand; Health; Image; Image Analysis; Individual; Institution; Interpersonal Relations; Intervention; Knowledge; Language; Language Development; Learning; Magnetic Resonance Imaging; Measurement; Methods; Modeling; Morphologic artifacts; Movement; Neural Network Simulation; Obstructive Sleep Apnea; Oryctolagus cuniculus; Outcome; Palate; Pathologist; Production; Recommendation; Research; Research Personnel; Rotation; Schools; Shapes; Source; Speech; Speech Disorders; Speech Sound; Speech Therapy; Structure; Stuttering; Surface; Techniques; Technology; Testing; Time; Tissue Model; Tissue imaging; Tissues; Tongue; Tooth structure; Training; Translational Research; Ultrasonic wave; Ultrasonography; United States; Universities; Variant; Visit; Visual; Work; base; deep learning; deep learning model; imaging Segmentation; imaging system; improved; insight; machine learning model; neural network; novel; novel strategies; predictive modeling; prevent; simulation; skills; social; sound; speech accuracy; success; tissue mapping; tongue apex; tool; ultrasound
PROJECT SUMMARY
Ultrasound imaging provides articulatory feedback useful for remediating speech sound disorders,
which affect 5% of children and cause long-term deficits in social health and employment in adulthood.
However, ultrasound imaging can be difficult to interpret for clinicians and individuals, limiting the
understanding of articulatory data and ultrasound biofeedback therapy speech outcomes. A likely source of
difficulty is the articulatory information missing from ultrasound images, such as the tongue tip and reference
vocal tract structures (e.g., palate) that cannot be consistently imaged with ultrasound due to air.
Much of this missing information from ultrasound can be ascertained in magnetic resonance imaging
(MRI) because MRI images the entire vocal tract. Comparing ultrasound images and MRI will improve
interpretation of ultrasound images by confirming that certain characteristics of ultrasound images (e.g.,
obscured tongue tip, double edge artifacts) arise from characteristics of tongue shapes; in addition, models can
be trained to predict from ultrasound images the articulatory information shown in MRI. However, articulatory
variability prevents direct comparison between these images. A novel approach to avoid variability is to
simulate ultrasound wave propagation in tissue segmented from MRI. Recent advancements in deep learning
have also demonstrated the ability to address the inverse problem of predicting articulation from acoustic data.
Thus, to meet the needs of improving ultrasound image interpretation, the goal for this proposal is to use
simulated ultrasound images and neural network models to characterize and predict articulatory information
missing from 2D midsagittal ultrasound images. These models will be trained on MRI and audio data.
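To make the simulation idea concrete, here is a minimal sketch, assuming a hypothetical two-dimensional sound-speed map in place of a real MRI tissue segmentation: an ultrasound pulse is propagated with a simple finite-difference scheme and echoes are recorded at the probe position. The grid size, tissue geometry, and pulse parameters are invented placeholders, not the proposal's actual simulation pipeline.

```python
# A minimal sketch, assuming a hypothetical 2-D sound-speed map standing in
# for an MRI-derived tissue segmentation: propagate an ultrasound pulse with
# a finite-difference scheme and record echoes at the probe position.
import numpy as np

# --- Hypothetical tissue map: 0 = air, 1 = soft tissue (e.g., tongue) ---
nz, nx = 200, 200                       # grid size (depth x width)
dx = 0.2e-3                             # grid spacing [m]
tissue = np.zeros((nz, nx), dtype=int)
tissue[40:160, :] = 1                   # slab of soft tissue with air above and below

# Sound speed [m/s]; the large tissue/air contrast is what makes the tongue
# surface bright and the region beyond it invisible in real ultrasound.
speed = np.where(tissue == 1, 1540.0, 343.0)

# --- Finite-difference time-domain settings (2nd-order scalar wave eqn) ---
dt = 0.4 * dx / speed.max()             # CFL-stable time step
nt = 800
src_z, src_x = 45, nx // 2              # "transducer" element inside the tissue
f0 = 0.5e6                              # pulse centre frequency [Hz]; kept low so the coarse grid resolves it
t = np.arange(nt) * dt
source = np.exp(-((t - 3.0 / f0) * f0 * 2.0) ** 2) * np.sin(2.0 * np.pi * f0 * t)

p_prev = np.zeros((nz, nx))
p_curr = np.zeros((nz, nx))
rf_line = np.zeros(nt)                  # echo signal recorded at the source element

for n in range(nt):
    # Discrete Laplacian of the pressure field (interior points; grid edges
    # stay rigid, so they also reflect -- acceptable for this sketch).
    lap = np.zeros_like(p_curr)
    lap[1:-1, 1:-1] = (
        p_curr[2:, 1:-1] + p_curr[:-2, 1:-1]
        + p_curr[1:-1, 2:] + p_curr[1:-1, :-2]
        - 4.0 * p_curr[1:-1, 1:-1]
    ) / dx**2
    # Leapfrog update: p_next = 2*p - p_prev + (c*dt)^2 * laplacian(p)
    p_next = 2.0 * p_curr - p_prev + (speed * dt) ** 2 * lap
    p_next[src_z, src_x] += source[n]   # inject the transmit pulse
    rf_line[n] = p_next[src_z, src_x]   # record transmit plus returning echoes
    p_prev, p_curr = p_curr, p_next

# After the transmit pulse dies down, the strongest echo in rf_line comes from
# the tissue/air boundary, mimicking the bright tongue-surface edge and the
# shadowed region beyond it seen in real ultrasound images of speech.
print("peak echo amplitude after transmit:", float(np.abs(rf_line[300:]).max()))
```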
We will characterize missing articulatory information by developing efficient simulation of ultrasound
images from MRI tissue segmentation. One hypothesis that will be tested is the guideline for using the lower
edge of double edge artifacts in ultrasound images as the tongue surface. To test this guideline for a greater
range of data (including disordered child speakers and different simulated probe rotations), double edge
artifacts will be compared with tissue maps used to generate the simulated images. Another comparison will
estimate the amount of tongue tip typically missing in /r/ tongue shapes. We will then develop a deep learning
model that trains on information from MRI to predict midsagittal vocal tract shapes (including the tongue tip and
palate) from the inputs of tongue contours from ultrasound and audio. With these aims, we will add insight into
ultrasound imaging for speech and provide a tool with future applications in expanding articulatory information,
e.g., testing outcomes of using more complete vocal tract information in ultrasound biofeedback therapy.
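As a rough illustration of the prediction aim, the sketch below assumes a small PyTorch model that maps an ultrasound tongue contour plus one frame of audio features to a fuller midsagittal vocal-tract contour. The architecture, point counts, and feature sizes are placeholders, and the random tensors stand in for real paired ultrasound, audio, and MRI data; this is not the proposal's actual model.

```python
# A minimal sketch (assumed architecture, illustrative shapes): map an
# ultrasound tongue contour plus audio features to a fuller midsagittal
# vocal-tract contour such as an MRI-derived tongue-tip-and-palate shape.
import torch
import torch.nn as nn

N_CONTOUR_PTS = 32      # (x, y) points on the ultrasound tongue contour (placeholder)
N_AUDIO_FEATS = 40      # e.g., one frame of mel-spectrogram features (placeholder)
N_TARGET_PTS = 64       # (x, y) points on the predicted vocal-tract shape (placeholder)

class ContourAudioToVocalTract(nn.Module):
    def __init__(self):
        super().__init__()
        # Separate encoders for the tongue contour and the audio frame.
        self.contour_enc = nn.Sequential(
            nn.Linear(N_CONTOUR_PTS * 2, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        self.audio_enc = nn.Sequential(
            nn.Linear(N_AUDIO_FEATS, 64), nn.ReLU(),
        )
        # Joint decoder regresses the full set of (x, y) target coordinates.
        self.decoder = nn.Sequential(
            nn.Linear(128 + 64, 256), nn.ReLU(),
            nn.Linear(256, N_TARGET_PTS * 2),
        )

    def forward(self, contour, audio):
        # contour: (batch, N_CONTOUR_PTS, 2); audio: (batch, N_AUDIO_FEATS)
        h = torch.cat([self.contour_enc(contour.flatten(1)),
                       self.audio_enc(audio)], dim=1)
        return self.decoder(h).view(-1, N_TARGET_PTS, 2)

# Toy training step on random stand-in data; real training would pair
# (simulated or real) ultrasound contours and audio with MRI-derived shapes.
model = ContourAudioToVocalTract()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
contour = torch.randn(8, N_CONTOUR_PTS, 2)
audio = torch.randn(8, N_AUDIO_FEATS)
target = torch.randn(8, N_TARGET_PTS, 2)

pred = model(contour, audio)
loss = nn.functional.mse_loss(pred, target)   # point-to-point regression loss
loss.backward()
opt.step()
print("toy loss:", float(loss))
```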
Training for this fellowship will occur at the University of Cincinnati, with opportunities to visit labs at two
additional institutions. The proposed plan provides training from a range of investigators in topics such as
ultrasound imaging and application to speech research, developing skills needed for my future goals.
Project Outcomes
- Journal articles: 0
- Monographs: 0
- Research awards: 0
- Conference papers: 0
- Patents: 0
Similar Overseas Grants

Nonlinear Acoustics for the conditioning monitoring of Aerospace structures (NACMAS)
- Award number: 10078324
- Fiscal year: 2023
- Funding amount: $39.7K
- Project category: BEIS-Funded Programmes

ORCC: Marine predator and prey response to climate change: Synthesis of Acoustics, Physiology, Prey, and Habitat In a Rapidly changing Environment (SAPPHIRE)
- Award number: 2308300
- Fiscal year: 2023
- Funding amount: $39.7K
- Project category: Continuing Grant

University of Salford (The) and KP Acoustics Group Limited KTP 22_23 R1
- Award number: 10033989
- Fiscal year: 2023
- Funding amount: $39.7K
- Project category: Knowledge Transfer Partnership

User-controllable and Physics-informed Neural Acoustics Fields for Multichannel Audio Rendering and Analysis in Mixed Reality Application
- Award number: 23K16913
- Fiscal year: 2023
- Funding amount: $39.7K
- Project category: Grant-in-Aid for Early-Career Scientists

Combined radiation acoustics and ultrasound imaging for real-time guidance in radiotherapy
- Award number: 10582051
- Fiscal year: 2023
- Funding amount: $39.7K
- Project category:

Comprehensive assessment of speech physiology and acoustics in Parkinson's disease progression
- Award number: 10602958
- Fiscal year: 2023
- Funding amount: $39.7K
- Project category:

The acoustics of climate change - long-term observations in the arctic oceans
- Award number: 2889921
- Fiscal year: 2023
- Funding amount: $39.7K
- Project category: Studentship

Collaborative Research: Estimating Articulatory Constriction Place and Timing from Speech Acoustics
- Award number: 2343847
- Fiscal year: 2023
- Funding amount: $39.7K
- Project category: Standard Grant

Flow Physics and Vortex-Induced Acoustics in Bio-Inspired Collective Locomotion
- Award number: DGECR-2022-00019
- Fiscal year: 2022
- Funding amount: $39.7K
- Project category: Discovery Launch Supplement

Collaborative Research: Estimating Articulatory Constriction Place and Timing from Speech Acoustics
- Award number: 2141275
- Fiscal year: 2022
- Funding amount: $39.7K
- Project category: Standard Grant