CRCNS: Avian Model for Neural Activity Driven Speech Prostheses
Basic Information
- Award Number: 9981725
- Principal Investigator: TIMOTHY Q GENTNER
- Amount: $344,700
- Host Institution:
- Host Institution Country: United States
- Project Category:
- Fiscal Year: 2019
- Funding Country: United States
- Project Period: 2019-08-01 to 2024-07-31
- Project Status: Completed
- Source:
- Keywords: Acoustics, Algorithms, Anatomy, Animal Model, Animals, Area, Artificial Intelligence, Basic Science, Behavior, Behavioral, Birds, Brain, Cell Nucleus, Characteristics, Clinical Research, Clinical assessments, Communication, Communities, Complement, Complex, Comprehension, Computer Interface, Computer software, Computers, Course Content, Data, Data Science, Data Set, Development, Diagnosis, Disease, Educational Materials, Educational workshop, Electrodes, Engineering, Evaluation, Feedback, Finches, Future, Generations, Goals, High School Outreach, High School Student, Human, Implant, Individual, Infrastructure, Injury, Instruction, Knowledge, Language, Language Disorders, Larynx, Learning Module, Limb Prosthesis, Limb structure, Limes, Machine Learning, Maps, Methods, Modeling, Motor, Motor Cortex, Neurodegenerative Disorders, Neurosciences, Outcome Measure, Output, Patients, Performance, Play, Prevention, Principal Investigator, Production, Prosthesis, Quadriplegia, Recording of previous events, Research, Robot, Role, Running, Self-Help Devices, Signal Transduction, Songbirds, Space Models, Speech, Speech Development, Students, System, Techniques, Technology, Testing, Time, Translating, Translations, Upper Extremity, Voice, Work, auditory feedback, base, bird song, brain computer interface, brain machine interface, data sharing, design, effectiveness testing, functional electrical stimulation, functional restoration, graduate student, hackathon, high school, human subject, improved, large datasets, machine learning algorithm, meetings, mind control, model development, motor control, multidisciplinary, neural model, neural prosthesis, neurodevelopment, neurophysiology, neurotransmission, nonhuman primate, novel, open source, operation, programs, relating to nervous system, repository, response, signal processing, success, undergraduate student, vocal learning, vocalization, web site
Project Summary
Understanding the physical, computational, and theoretical bases of human vocal communication (speech) is crucial to improving comprehension of voice, speech, and language diseases and disorders, and to improving their diagnosis, treatment, and prevention. Meeting this challenge requires knowledge of the
neural and sensorimotor mechanisms of vocal motor control. Our project will directly investigate the neural
and sensorimotor mechanisms involved in the production of complex, natural vocal communication
signals. Our results will directly enhance brain-computer interface technology for communication and will
accelerate the development of prostheses and other assistive technologies for individuals with communication deficits due to injury or disease. We will develop a vocal prosthetic that directly translates
neural signals in cortical sensorimotor and vocal-motor control regions into vocal communication signals
output in real time. Building on the success of brain-computer interfaces for general motor control in non-human primates, the prosthetic will be developed in songbirds, whose acoustically rich, learned vocalizations share many features with human speech. Because the songbird vocal apparatus is functionally and anatomically similar to the human larynx, and the cortical regions that control it are closely
analogous to speech motor-control areas of the human brain, songbirds offer an ideal model for the
proposed studies. Beyond the application of our work to human voice and speech, development of the
vocal prosthetic will enable novel speech-relevant studies in the songbird model that can reveal
fundamental mechanisms of vocal learning and production. In the first stage of the project, we will collect a large data set of simultaneously recorded neural activity and vocalizations. In stage two, we will apply
machine learning and artificial intelligence techniques to develop algorithms that map neural recordings to
vocal output and enable us to estimate intended vocalizations directly from neural data. In stage three, we
will develop computing infrastructure to run these algorithms in real-time, predicting intended vocalizations
from neural activity as the animal is actively producing these vocalizations. In stage four, we will test the
effectiveness of the prosthetic by replacing the bird's own vocalization with the output from our prosthetic
system. Success will set the stage for testing of these technologies in humans and translation to multiple
assistive devices. In addition to our research goals, the project will engage graduate, undergraduate, and
high school students through the development of novel educational modules that introduce students to brain-machine interfaces and to multidisciplinary studies spanning engineering and the basic sciences.
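To make the stage-two mapping concrete, the sketch below shows one generic way a decoder from neural activity to vocal acoustics can be set up: ridge regression from time-lagged, binned spike counts to spectrogram frames of the produced song. This is a minimal illustration in Python (using numpy and scikit-learn); the array shapes, lag length, regularization strength, synthetic data, and the choice of a linear decoder are assumptions made here for exposition, not the project's actual data or algorithms.

```python
# Illustrative sketch only: a linear (ridge) decoder from binned neural activity
# to spectrogram frames. All shapes and parameters are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical simultaneously recorded data:
#   spikes: (n_frames, n_units)  binned spike counts aligned to song frames
#   spec:   (n_frames, n_freqs)  log-power spectrogram of the produced song
n_frames, n_units, n_freqs, n_lags = 2000, 64, 32, 10
spikes = rng.poisson(2.0, size=(n_frames, n_units)).astype(float)
spec = rng.normal(size=(n_frames, n_freqs))

def lagged_features(x, n_lags):
    """Stack the current bin and the previous n_lags-1 bins into one feature row,
    so the decoder can use a short history of neural activity per frame."""
    rows = [x[i - n_lags + 1 : i + 1].ravel() for i in range(n_lags - 1, len(x))]
    return np.asarray(rows)

X = lagged_features(spikes, n_lags)   # (n_frames - n_lags + 1, n_units * n_lags)
Y = spec[n_lags - 1:]                 # targets aligned with the lagged features

# Train/test split in time (no shuffling), to mimic decoding future vocalizations.
split = int(0.8 * len(X))
decoder = Ridge(alpha=1.0).fit(X[:split], Y[:split])
pred = decoder.predict(X[split:])

# Per-frequency correlation between predicted and actual spectrogram frames,
# one simple stand-in for an offline effectiveness measure.
corr = [np.corrcoef(pred[:, f], Y[split:, f])[0, 1] for f in range(n_freqs)]
print(f"mean per-frequency correlation: {np.mean(corr):.3f}")
```

In a real-time system of the kind described in stage three, a trained decoder along these lines would be driven by streaming neural data, with the predicted spectrogram converted back to audio for playback.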
RELEVANCE (See instructions):
Developing a vocal prosthesis will directly enhance brain-computer interface technology for communication and accelerate the realization of prostheses and other assistive technologies for individuals with communication deficits due to injury or disease. The basic knowledge of the neural and sensorimotor mechanisms of vocal motor control acquired will impact understanding of multiple voice, speech, and language diseases and disorders. The techniques developed will enable novel future studies of vocal production and development.
Project Outcomes
Journal Articles (0)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
Other Grants by TIMOTHY Q GENTNER

Temporal Pattern Perception Mechanisms for Acoustic Communication
- Award Number: 10160864
- Fiscal Year: 2019
- Funding Amount: $344,700
- Project Category:

CRCNS: Avian Model for Neural Activity Driven Speech Prostheses
- Award Number: 10216216
- Fiscal Year: 2019
- Funding Amount: $344,700
- Project Category:

Temporal Pattern Perception Mechanisms for Acoustic Communication
- Award Number: 10407633
- Fiscal Year: 2019
- Funding Amount: $344,700
- Project Category:

CRCNS: Avian Model for Neural Activity Driven Speech Prostheses
- Award Number: 10408524
- Fiscal Year: 2019
- Funding Amount: $344,700
- Project Category:

CRCNS: Avian Model for Neural Activity Driven Speech Prostheses
- Award Number: 9916239
- Fiscal Year: 2019
- Funding Amount: $344,700
- Project Category:

Temporal Pattern Perception Mechanisms for Acoustic Communication
- Award Number: 10624335
- Fiscal Year: 2019
- Funding Amount: $344,700
- Project Category:

Temporal Pattern Perception Mechanisms for Acoustic Communication
- Award Number: 9803507
- Fiscal Year: 2019
- Funding Amount: $344,700
- Project Category:

CRCNS: Avian Model for Neural Activity Driven Speech Prostheses
- Award Number: 10452530
- Fiscal Year: 2019
- Funding Amount: $344,700
- Project Category:

CRCNS: Avian Model for Neural Activity Driven Speech Prostheses
- Award Number: 10671028
- Fiscal Year: 2019
- Funding Amount: $344,700
- Project Category:

Neural mechanisms of auditory temporal pattern perception
- Award Number: 9527903
- Fiscal Year: 2017
- Funding Amount: $344,700
- Project Category:
Similar Overseas Grants

DMS-EPSRC: Asymptotic Analysis of Online Training Algorithms in Machine Learning: Recurrent, Graphical, and Deep Neural Networks
- Award Number: EP/Y029089/1
- Fiscal Year: 2024
- Funding Amount: $344,700
- Project Category: Research Grant

CAREER: Blessing of Nonconvexity in Machine Learning - Landscape Analysis and Efficient Algorithms
- Award Number: 2337776
- Fiscal Year: 2024
- Funding Amount: $344,700
- Project Category: Continuing Grant

CAREER: From Dynamic Algorithms to Fast Optimization and Back
- Award Number: 2338816
- Fiscal Year: 2024
- Funding Amount: $344,700
- Project Category: Continuing Grant

CAREER: Structured Minimax Optimization: Theory, Algorithms, and Applications in Robust Learning
- Award Number: 2338846
- Fiscal Year: 2024
- Funding Amount: $344,700
- Project Category: Continuing Grant

CRII: SaTC: Reliable Hardware Architectures Against Side-Channel Attacks for Post-Quantum Cryptographic Algorithms
- Award Number: 2348261
- Fiscal Year: 2024
- Funding Amount: $344,700
- Project Category: Standard Grant

CRII: AF: The Impact of Knowledge on the Performance of Distributed Algorithms
- Award Number: 2348346
- Fiscal Year: 2024
- Funding Amount: $344,700
- Project Category: Standard Grant

CRII: CSR: From Bloom Filters to Noise Reduction Streaming Algorithms
- Award Number: 2348457
- Fiscal Year: 2024
- Funding Amount: $344,700
- Project Category: Standard Grant

EAGER: Search-Accelerated Markov Chain Monte Carlo Algorithms for Bayesian Neural Networks and Trillion-Dimensional Problems
- Award Number: 2404989
- Fiscal Year: 2024
- Funding Amount: $344,700
- Project Category: Standard Grant

CAREER: Efficient Algorithms for Modern Computer Architecture
- Award Number: 2339310
- Fiscal Year: 2024
- Funding Amount: $344,700
- Project Category: Continuing Grant

CAREER: Improving Real-world Performance of AI Biosignal Algorithms
- Award Number: 2339669
- Fiscal Year: 2024
- Funding Amount: $344,700
- Project Category: Continuing Grant