NRI: FND: Robust Learning of Sequential Motion from Human Demonstrations to Enable Robot-Guided Exercise Training

Basic Information

  • Award number:
    1830597
  • Principal investigator:
  • Amount:
    $750,000
  • Host institution:
  • Host institution country:
    United States
  • Project type:
    Standard Grant
  • Fiscal year:
    2019
  • Funding country:
    United States
  • Project period:
    2019-01-01 to 2024-09-30
  • Project status:
    Completed

Project Abstract

Therapeutic exercises are crucial for healthy living and for effective recovery from injury, surgery, disease, or frailty. Physical or occupational therapists are typically responsible for directing therapeutic exercises, yet there is currently a mismatch between supply and demand for these services: a predicted national shortage of 26,600 physical therapists by 2025 and 50,000 occupational therapists by 2030. Technology-based home programs are rapidly emerging as a way to combat this skilled-labor shortage. Technology-assisted exercise programs that promote highly structured practice and provide real-time feedback are believed to improve well-being but have yet to be realized. This project bridges that gap by designing intelligent robots that can take the role of a therapist during therapeutic exercise training. The idea is that a clinician teaches a robot any structured exercise through demonstrations, and the robot then takes the role of a coach to teach users and provide quantitative evaluation of their performance. For a robot to do that, we need an intelligent algorithm that allows therapists to teach the robot any new exercise without actually programming it and enables the robot to learn from the therapists' demonstrations. This project will develop a novel Learning from Demonstration (LfD) framework to realize exercise-trainer robots.

The core technical challenges of designing an LfD framework for an exercise-trainer robot are i) robustly learning sequences of human movements from lay users' demonstrations while accommodating inter- and intra-personal variations, and ii) offering a quantitative metric that explains the deviation of a user's trajectory from the demonstrated sequence in a contextually meaningful way. Solving these challenges requires major changes in the way we currently learn motion trajectories (low-level policy learning) and model the relations among trajectories for learning sequential tasks (high-level policy learning). Accordingly, this research will design the entire LfD pipeline based only on the kinematic and kinetic variables of motion. The core of this LfD framework is a phase space model (PSM) for learning task trajectories. The PSM leverages dynamical systems theory to analyze motion variables, segment a task trajectory, and build a parametric representation that is robust against spatio-temporal variations. The compact parameter set that the PSM generates is used by a graphical model to learn the high-level policy underlying the demonstrated task while leveraging the typical anatomical constraints of human limbs. The same parameter set is used to design a quantitative metric that evaluates the learning outcome.

The project will evaluate the fidelity of a co-robot exercise trainer powered by this LfD framework in teaching upper-extremity exercises through a series of user studies. An ABB YuMi robot will be used as the test platform. The demonstration data will be collected from inertial measurement units (IMUs) worn by student-therapists on the hand, forearm, upper arm, and torso. The robot will demonstrate the learned exercises to older adult (OA) participants, who will then perform the exercises by mirroring the robot. During the training phase, the OAs will also wear IMUs so that their performance can be assessed with respect to the original demonstration from the therapist. The fidelity of movement transmission from the therapist, to the robot, to the patient will be tested with a high-speed 3D motion-capture system, the gold standard for kinematic analysis. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
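The abstract describes the low-level learning step only at a high level. As a purely illustrative sketch (not the project's actual PSM, whose details are not given here), the Python snippet below builds a phase-space view of a single joint trajectory (angle vs. angular velocity), cuts it at zero-velocity crossings, and extracts a compact parameter vector per segment. The segmentation rule, helper names, and chosen parameters are all assumptions for illustration.

```python
import numpy as np

def phase_space(theta, dt):
    """Return phase-space points (angle, angular velocity) for one joint."""
    omega = np.gradient(theta, dt)           # finite-difference angular velocity
    return np.column_stack([theta, omega])   # one row per time step

def segment_at_velocity_zeros(ps, min_len=2):
    """Split a phase-space trajectory where angular velocity changes sign.

    Zero-velocity crossings are a simple, commonly used cut point for cyclic
    exercise motions; the project's PSM may use a different, dynamics-based
    criterion.
    """
    omega = ps[:, 1]
    cuts = np.where(np.diff(np.sign(omega)) != 0)[0] + 1
    bounds = [0, *cuts.tolist(), len(ps)]
    return [ps[a:b] for a, b in zip(bounds[:-1], bounds[1:]) if b - a >= min_len]

def segment_parameters(seg):
    """Compact per-segment descriptors: range of motion, peak speed, duration."""
    theta, omega = seg[:, 0], seg[:, 1]
    return np.array([theta.max() - theta.min(), np.abs(omega).max(), len(seg)])

# Example on a synthetic elbow-flexion trajectory (hypothetical data).
t = np.linspace(0, 4 * np.pi, 400)
theta = 0.6 * np.sin(t)
segments = segment_at_velocity_zeros(phase_space(theta, t[1] - t[0]))
params = [segment_parameters(s) for s in segments]
```

A representation like this is deliberately compact: each segment reduces to a few numbers that are insensitive to exactly when or how fast a repetition was performed, which is one way to accommodate the spatio-temporal variation the abstract mentions.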
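In the same illustrative spirit, the learner-evaluation idea (a quantitative, contextually meaningful deviation measure) could be approximated by comparing per-segment parameters of an older adult's repetition against the therapist's demonstration. The weights and normalization below are assumptions, not the project's published metric.

```python
import numpy as np

def deviation_scores(demo_params, user_params, weights=(1.0, 1.0, 0.5)):
    """Weighted relative deviation between matched segments.

    demo_params / user_params: lists of per-segment parameter vectors
    (e.g. range of motion, peak speed, duration), one vector per segment.
    Returns one score per segment, so feedback can point to the phase of
    the exercise where the user deviated most.
    """
    w = np.asarray(weights, dtype=float)
    scores = []
    for d, u in zip(demo_params, user_params):
        d, u = np.asarray(d, dtype=float), np.asarray(u, dtype=float)
        rel_err = np.abs(u - d) / (np.abs(d) + 1e-9)   # scale-free deviation
        scores.append(float(w @ rel_err / w.sum()))
    return scores
```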

Project Outcomes

Journal articles (8)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Robust Behavior Cloning with Adversarial Demonstration Detection
Learning Optimized Human Motion via Phase Space Analysis
Self-Supervised Visual Motor Skills via Neural Radiance Fields
Learning Stable Dynamics via Iterative Quadratic Programming
Detecting Incorrect Visual Demonstrations for Improved Policy Learning
  • DOI:
  • Publication year:
    2022
  • Journal:
  • Impact factor:
    0
  • Authors:
    Mostafa Hussein; M. Begum
  • Corresponding authors:
    Mostafa Hussein; M. Begum

Other Publications by Momotaz Begum

An improved Kohonen self-organizing map clustering algorithm for high-dimensional data sets
Recursive approach for multiple step-ahead software fault prediction through long short-term memory (LSTM)
Generation of Genetic Networks from a Small Number of Gene Expression Patterns under the Boolean Network Model
  • DOI:
  • Publication year:
    2011
  • Journal:
  • Impact factor:
    0
  • Authors:
    Momotaz Begum;Md. Jakir Sumaya Kazary;Hossain;Sohag Kumar;Md. Rokon Bhadra;Uddin;M. J. Hossain;S. K. Bhadra
  • Corresponding author:
    S. K. Bhadra

Other Grants Held by Momotaz Begum

CRII: CHS: Human-Robot Collaboration in Special Education: A Robot that Learns Service Delivery from Teachers' Demonstrations
  • Award number:
    1664554
  • Fiscal year:
    2016
  • Amount:
    $750,000
  • Project type:
    Continuing Grant
CRII: CHS: Human-Robot Collaboration in Special Education: A Robot that Learns Service Delivery from Teachers' Demonstrations
  • Award number:
    1464226
  • Fiscal year:
    2015
  • Amount:
    $750,000
  • Project type:
    Continuing Grant

Similar NSFC Grants

Molecular mechanisms of carbofuran degradation by Novosphingobium sp. FND-3
  • Grant number:
    31670112
  • Award year:
    2016
  • Amount:
    ¥620,000
  • Project type:
    General Program

Similar Overseas Grants

Movement perception in Functional Neurological Disorder (FND)
  • Award number:
    MR/Y004000/1
  • Fiscal year:
    2024
  • Amount:
    $750,000
  • Project type:
    Research Grant
NRI: FND: Collaborative Research: DeepSoRo: High-dimensional Proprioceptive and Tactile Sensing and Modeling for Soft Grippers
  • Award number:
    2348839
  • Fiscal year:
    2023
  • Amount:
    $750,000
  • Project type:
    Standard Grant
S&AS: FND: COLLAB: Planning and Control of Heterogeneous Robot Teams for Ocean Monitoring
  • Award number:
    2311967
  • Fiscal year:
    2022
  • Amount:
    $750,000
  • Project type:
    Standard Grant
NRI: FND: Collaborative Research: DeepSoRo: High-dimensional Proprioceptive and Tactile Sensing and Modeling for Soft Grippers
  • Award number:
    2024882
  • Fiscal year:
    2021
  • Amount:
    $750,000
  • Project type:
    Standard Grant
NRI: FND: Collaborative Research: DeepSoRo: High-dimensional Proprioceptive and Tactile Sensing and Modeling for Soft Grippers
  • Award number:
    2024646
  • Fiscal year:
    2021
  • Amount:
    $750,000
  • Project type:
    Standard Grant
NRI: FND: Foundations for Physical Co-Manipulation with Mixed Teams of Humans and Soft Robots
  • Award number:
    2024792
  • Fiscal year:
    2021
  • Amount:
    $750,000
  • Project type:
    Standard Grant
NRI: FND: Foundations for Physical Co-Manipulation with Mixed Teams of Humans and Soft Robots
  • Award number:
    2024670
  • Fiscal year:
    2021
  • Amount:
    $750,000
  • Project type:
    Standard Grant
NRI: FND: Natural Power Transmission through Unconstrained Fluids for Robotic Manipulation
  • Award number:
    2024409
  • Fiscal year:
    2020
  • Amount:
    $750,000
  • Project type:
    Standard Grant
NRI: FND: Multi-Manipulator Extensible Robotic Platforms
  • Award number:
    2024435
  • Fiscal year:
    2020
  • Amount:
    $750,000
  • Project type:
    Standard Grant
Collaborative Research: NRI: FND: Flying Swarm for Safe Human Interaction in Unstructured Environments
  • Award number:
    2024615
  • Fiscal year:
    2020
  • Amount:
    $750,000
  • Project type:
    Standard Grant