Objectively Quantifying Speech Outcomes of Children with Cleft Palate
Basic Information
- Grant number: 9765280
- Principal investigator:
- Amount: $225,500
- Institution:
- Institution country: United States
- Project category:
- Fiscal year: 2018
- Funding country: United States
- Project period: 2018-08-16 to 2021-12-31
- Project status: Completed
- Source:
- Keywords: 7 year old; Acoustics; Age; Algorithms; American; Articulation; Artificial Intelligence; Behavior Therapy; Child; Cleft Palate; Cleft lip with or without cleft palate; Clinic; Clinical; Clinical Research; Communities; Data; Databases; Development; Dimensions; Ear; Ensure; Environment; Evaluation; Exhibits; Four-dimensional; Frequencies; Funding; Gold; Human; Individual; International; Intervention; Judgment; Language; Learning; Maps; Measures; Methods; Modeling; National Institute of Dental and Craniofacial Research; Nose; Operative Surgical Procedures; Outcome; Outcomes Research; Output; Pathologist; Perception; Performance; Population; Positioning Attribute; Production; Protocols documentation; Proxy; Reference Values; Reporting; Research; Sampling; Series; Severities; Signal Transduction; Speech; Speech Acoustics; Speech Disorders; Technology; Time; Training; United States National Institutes of Health; Utah; Validation; Validity and Reliability; Visit; Work; base; cleft lip and palate; clinically relevant; craniofacial; impression; improved outcome; learning algorithm; mobile application; novel; predictive modeling; signal processing; success; tool
Project Summary
Perceptual assessment of hypernasality is considered a critical component when evaluating the speech of
children with cleft lip and/or palate (CLP). However, most speech-language pathologists (SLPs) do not receive
formal training in perceptual evaluation of speech and, as a result, research shows that subjective ratings
are inherently biased toward the perceiver and exhibit considerable variability. In this project, we aim to develop an
artificial intelligence (AI) algorithm that automatically evaluates speech along four dimensions deemed to be
critically important by the Americleft Speech Outcomes Group (ASOG), namely speech acceptability,
articulation, hypernasality, and audible nasal emissions. The AI algorithm in this project is based on an existing
database of speech collected as a part of an NIH-funded project to develop reliable speech outcomes by
improving the reliability of perceptual ratings by training clinicians (NIDCR DE019-01235, PI: Kathy Chapman).
This database contains speech samples from 125 five- to seven-year-old children, along with multiple perceptual
ratings for each speech sample. The clinicians participating in that study were successfully trained using a new
protocol from the Americleft Speech Outcomes Group and exhibit excellent inter-clinician reliability.
In SA1 we will develop an AI algorithm that automatically learns the relationship between a
comprehensive set of speech acoustics and the average of the ASOG-trained expert ratings for each of the
four perceptual dimensions. This approach is based on technology that the PIs have successfully used to
evaluate dysarthric speech. Unique to these algorithms is modeling of perceptual judgments of trained experts
using tools from statistical signal processing and AI. The output of the algorithms will map to a clinically-
relevant scale, rather than to norm-referenced values that may or may not be meaningful. In SA2, we will
evaluate the tool on new data: new speech samples will be collected with a mobile app at a partner clinic,
following the same protocol as in the original study. Every collected sample will also be rated by ASOG-trained
clinicians. We will use these data to evaluate the accuracy of the AI model by comparing the model's predictions
with the average of ASOG-trained experts. Preliminary results show promise that the proposed approach will
yield a successful tool for accurately characterizing perceptual dimensions in the speech of children with CLP.
These results indicate that a number of acoustic features that have been developed previously by the PIs
accurately capture differences in hypernasality and articulation between the speech of three children with CLP
(with varying severity). Furthermore, we show the success of our approach on a different, but related, task:
objective evaluation of dysarthric speech. We show that an algorithm that automatically rates hypernasality
performs on par with the judgment of human evaluators. The results of the proposed research will form the
basis for a subsequent R01 proposal for the development and evaluation of a clinical tool to objectively
quantify and track speech production in children with CLP.
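The overall pipeline described in SA1 and SA2 (learn a mapping from acoustic features to the average of ASOG-trained expert ratings, then check agreement between model predictions and expert ratings on new samples) can be sketched roughly as follows. This is a minimal illustration on synthetic stand-in data, not the project's actual algorithm: the number of features, the ridge-regression model, and the agreement metrics (MAE and Pearson correlation) are assumptions made for the sketch.

```python
import numpy as np

# Hypothetical setup: each speech sample is summarized by a vector of
# acoustic features, and the target is the average of the expert ratings
# on one perceptual dimension (e.g., hypernasality on a clinical scale).
rng = np.random.default_rng(0)
n_samples, n_features = 125, 12                 # mirrors the ~125-sample database
X = rng.normal(size=(n_samples, n_features))    # acoustic features (stand-in data)
true_w = rng.normal(size=n_features)
ratings = X @ true_w + 0.3 * rng.normal(size=n_samples)  # mean expert ratings (synthetic)

# SA1 (sketch): ridge regression, closed form, mapping features to the rating scale.
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ ratings)
pred = X @ w

# SA2 (sketch): agreement between model predictions and the average expert rating.
mae = np.mean(np.abs(pred - ratings))
r = np.corrcoef(pred, ratings)[0, 1]
print(f"MAE={mae:.2f}  Pearson r={r:.2f}")
```

In practice the evaluation in SA2 would be run on held-out samples collected at the partner clinic, not on the training data as in this toy example.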
Project Outcomes
Journal articles (0)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Other publications by Visar Berisha
Other grants by Visar Berisha
Improving communication outcomes in children with cleft palate in rural India
- Grant number: 10741579
- Fiscal year: 2023
- Amount: $225,500
- Project category:
Validating an objective assessment of speech outcomes of children with cleft palate pre and post secondary surgery
- Grant number: 10667250
- Fiscal year: 2022
- Amount: $225,500
- Project category:
Quantifying articulatory performance in children with dysarthria: Development of an automated metric for clinical use
- Grant number: 10601094
- Fiscal year: 2022
- Amount: $225,500
- Project category:
Quantifying articulatory performance in children with dysarthria: Development of an automated metric for clinical use
- Grant number: 10439252
- Fiscal year: 2022
- Amount: $225,500
- Project category:
The effects of telepractice technology on dysarthric speech evaluation
- Grant number: 10383726
- Fiscal year: 2021
- Amount: $225,500
- Project category:
The effects of telepractice technology on dysarthric speech evaluation
- Grant number: 10196408
- Fiscal year: 2021
- Amount: $225,500
- Project category:
A web-based platform for cross-linguistic research in dysarthric speech
- Grant number: 8822436
- Fiscal year: 2015
- Amount: $225,500
- Project category:
A web-based platform for cross-linguistic research in dysarthric speech
- Grant number: 8991676
- Fiscal year: 2015
- Amount: $225,500
- Project category:
Perception of dysarthric speech: An objective model of dysarthric speech evaluation with actionable outcomes
- Grant number: 9312085
- Fiscal year: 2004
- Amount: $225,500
- Project category:
Perception of dysarthric speech: An objective model of dysarthric speech evaluation with actionable outcomes
- Grant number: 9911475
- Fiscal year: 2004
- Amount: $225,500
- Project category:
Similar overseas grants
Nonlinear Acoustics for the conditioning monitoring of Aerospace structures (NACMAS)
- Grant number: 10078324
- Fiscal year: 2023
- Amount: $225,500
- Project category: BEIS-Funded Programmes
ORCC: Marine predator and prey response to climate change: Synthesis of Acoustics, Physiology, Prey, and Habitat In a Rapidly changing Environment (SAPPHIRE)
- Grant number: 2308300
- Fiscal year: 2023
- Amount: $225,500
- Project category: Continuing Grant
University of Salford (The) and KP Acoustics Group Limited KTP 22_23 R1
- Grant number: 10033989
- Fiscal year: 2023
- Amount: $225,500
- Project category: Knowledge Transfer Partnership
User-controllable and Physics-informed Neural Acoustics Fields for Multichannel Audio Rendering and Analysis in Mixed Reality Application
- Grant number: 23K16913
- Fiscal year: 2023
- Amount: $225,500
- Project category: Grant-in-Aid for Early-Career Scientists
Combined radiation acoustics and ultrasound imaging for real-time guidance in radiotherapy
- Grant number: 10582051
- Fiscal year: 2023
- Amount: $225,500
- Project category:
Comprehensive assessment of speech physiology and acoustics in Parkinson's disease progression
- Grant number: 10602958
- Fiscal year: 2023
- Amount: $225,500
- Project category:
The acoustics of climate change - long-term observations in the arctic oceans
- Grant number: 2889921
- Fiscal year: 2023
- Amount: $225,500
- Project category: Studentship
Collaborative Research: Estimating Articulatory Constriction Place and Timing from Speech Acoustics
- Grant number: 2343847
- Fiscal year: 2023
- Amount: $225,500
- Project category: Standard Grant
Flow Physics and Vortex-Induced Acoustics in Bio-Inspired Collective Locomotion
- Grant number: DGECR-2022-00019
- Fiscal year: 2022
- Amount: $225,500
- Project category: Discovery Launch Supplement
Collaborative Research: Estimating Articulatory Constriction Place and Timing from Speech Acoustics
- Grant number: 2141275
- Fiscal year: 2022
- Amount: $225,500
- Project category: Standard Grant