CAREER: Robot Perception of Human Physical Skills

Basic Information

  • Award Number:
    2143576
  • Principal Investigator:
  • Amount:
    $543.6K
  • Host Institution:
  • Country of Host Institution:
    United States
  • Project Type:
    Continuing Grant
  • Fiscal Year:
    2022
  • Funding Country:
    United States
  • Start and End Dates:
    2022-06-01 to 2027-05-31
  • Project Status:
    Ongoing

Project Abstract

This award is funded in part under the American Rescue Plan Act of 2021 (Public Law 117-2). Everyday human activities are impressive feats of physical intelligence, from the careful placement of feet to avoid obstacles when walking to the precise, highly coordinated movement of fingers to type a sentence. Robots with even a fraction of human physical intelligence could revolutionize lives by automating repetitive tasks. Despite advances, however, robots with such physical abilities remain elusive. This project takes a step toward more capable robots by building 3D computer vision and machine learning algorithms that automatically analyze human skills from large-scale image and video collections readily available on the internet or captured in the wild. It will produce a large repository of high-level physical skills that can then be transferred to robots. The education and outreach activities of the project will impart theoretical knowledge in robot perception and provide practical experience to graduate, undergraduate, and high school students. Furthermore, the project will lead to advances in computer vision-based understanding of human physical skills and in-the-wild capture of a significant amount of skills data, and it will help solve problems outside of computer science, such as the study of the neuroscience of hand manipulation in monkeys.

To meet the research goals, the project will advance the state of the art in computer vision-based modeling and estimation of human physical skills from large-scale visual data. Existing methods are limited to operating in structured environments and cannot capture interactions in unconstrained visual data taken in cluttered environments such as homes. To address this limitation, the project will build (1) neural networks that model and estimate human physical properties such as shape and articulation from unconstrained data, (2) neural networks that model and estimate human motion and interaction from videos, and (3) methods for gathering and analyzing large amounts (10,000 person-hours) of unconstrained videos of human activities to build a repository of physical skills. This repository will inform the transfer of skills from humans to robots. The long-term aim of this research is to demonstrate that learning from images and videos is a viable path for robots to gain human-like physical abilities. To meet the education and outreach goals, the project will integrate theory and practice by acquiring several cameras and robot arms to teach an advanced course, a semester-long undergraduate research experience program, and a virtual workshop program. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.

Project Outcomes

Journal articles (3)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
MANUS: Markerless Hand-Object Grasp Capture using Articulated 3D Gaussians
DiVa-360: The Dynamic Visual Dataset for Immersive Neural Fields
  • DOI:
  • Publication date:
    2023-07
  • Journal:
  • Impact factor:
    0
  • Authors:
    Chengkun Lu;Peisen Zhou;Angela Xing;Chandradeep Pokhariya;Arnab Dey;Ishaan Shah;Rugved Mavidipalli;Dylan Hu;Andrew I. Comport;Kefan Chen;Srinath Sridhar
  • Corresponding author:
    Chengkun Lu;Peisen Zhou;Angela Xing;Chandradeep Pokhariya;Arnab Dey;Ishaan Shah;Rugved Mavidipalli;Dylan Hu;Andrew I. Comport;Kefan Chen;Srinath Sridhar
HyP-NeRF: Learning Improved NeRF Priors using a HyperNetwork
  • DOI:
    10.48550/arxiv.2306.06093
  • Publication date:
    2023-06
  • Journal:
  • Impact factor:
    0
  • Authors:
    Bipasha Sen;Gaurav Singh;Aditya Agarwal;Rohith Agaram;K. Krishna;Srinath Sridhar
  • Corresponding author:
    Bipasha Sen;Gaurav Singh;Aditya Agarwal;Rohith Agaram;K. Krishna;Srinath Sridhar

Other publications by Srinath Sridhar

OF ARCHITECTURAL SCENES USING A HIERARCHICAL METHOD
  • DOI:
  • Publication date:
    2010
  • Journal:
  • Impact factor:
    0
  • Authors:
    Nitin Jain;Srinath Sridhar
  • Corresponding author:
    Srinath Sridhar
Semantic Attention Flow Fields for Dynamic Scene Decomposition
  • DOI:
  • Publication date:
    2023
  • Journal:
  • Impact factor:
    0
  • Authors:
    Yiqing Liang;Eliot Laidlaw;Alexander Meyerowitz;Srinath Sridhar;James Tompkin
  • Corresponding author:
    James Tompkin
Investigating the Dexterity of Multi-Finger Input for Mid-Air Text Entry
Supplementary Document for CLIP-Sculptor
  • DOI:
  • Publication date:
    2023
  • Journal:
  • Impact factor:
    0
  • Authors:
    Aditya Sanghi;Rao Fu;Vivian Liu;Karl D. D. Willis;Hooman Shayani;A. Khasahmadi;Srinath Sridhar;Daniel Ritchie
  • Corresponding author:
    Daniel Ritchie
Supplementary Material for “Predicting the Physical Dynamics of Unseen 3D Objects”
  • DOI:
  • Publication date:
    2020
  • Journal:
  • Impact factor:
    0
  • Authors:
    Davis Rempe;Srinath Sridhar;He Wang;Leonidas Guibas
  • Corresponding author:
    Leonidas Guibas

Similar Overseas Grants

NRI: Enhancing Autonomous Underwater Robot Perception for Aquatic Species Management
  • Award Number:
    2220956
  • Fiscal Year:
    2023
  • Funding Amount:
    $543.6K
  • Project Type:
    Standard Grant
FRR: Collaborative Research: Unsupervised Active Learning for Aquatic Robot Perception and Control
  • Award Number:
    2237577
  • Fiscal Year:
    2023
  • Funding Amount:
    $543.6K
  • Project Type:
    Standard Grant
FRR: Collaborative Research: Unsupervised Active Learning for Aquatic Robot Perception and Control
  • Award Number:
    2237576
  • Fiscal Year:
    2023
  • Funding Amount:
    $543.6K
  • Project Type:
    Standard Grant
Next Generation Robot Perception Systems
  • Award Number:
    RGPIN-2020-04659
  • Fiscal Year:
    2022
  • Funding Amount:
    $543.6K
  • Project Type:
    Discovery Grants Program - Individual
Learning-Aided Integrated Control and Semantic Perception Architecture for Legged Robot Locomotion and Navigation in the Wild
  • Award Number:
    2118818
  • Fiscal Year:
    2021
  • Funding Amount:
    $543.6K
  • Project Type:
    Standard Grant
Next Generation Robot Perception Systems
  • Award Number:
    RGPIN-2020-04659
  • Fiscal Year:
    2021
  • Funding Amount:
    $543.6K
  • Project Type:
    Discovery Grants Program - Individual
Research and development of a safe, smart, next-generation robot based on breakthrough wideband force perception and robot artificial intelligence
  • Award Number:
    20K14713
  • Fiscal Year:
    2020
  • Funding Amount:
    $543.6K
  • Project Type:
    Grant-in-Aid for Early-Career Scientists
Active Robot Perception for Automated Potato Planting
  • Award Number:
    2457936
  • Fiscal Year:
    2020
  • Funding Amount:
    $543.6K
  • Project Type:
    Studentship
Next Generation Robot Perception Systems
  • Award Number:
    RGPIN-2020-04659
  • Fiscal Year:
    2020
  • Funding Amount:
    $543.6K
  • Project Type:
    Discovery Grants Program - Individual
From Action Perception to Joint Actions: Learning from Joint Handover Actions of Human Dyads for Robotic Actions and Human-Robot interactions (A01)
  • Award Number:
    437121936
  • Fiscal Year:
    2020
  • Funding Amount:
    $543.6K
  • Project Type:
    Collaborative Research Centres