RI: Medium: Collaborative Research: Towards Practical Encoderless Robotics Through Vision-Based Training and Adaptation


Basic Information

  • Award Number:
    1900681
  • Principal Investigator:
  • Amount:
    $425,000
  • Host Institution:
  • Host Institution Country:
    United States
  • Project Type:
    Standard Grant
  • Fiscal Year:
    2019
  • Funding Country:
    United States
  • Project Period:
    2019-08-15 to 2022-07-31
  • Project Status:
    Completed

Project Summary

As robots branch out into unstructured and dynamic human environments (such as homes, offices, and hospitals), they require a new design methodology. These robots need to be safe to operate next to humans; they are expected to handle the frequent changes and uncertainties that are inherent in human environments; and they should be as inexpensive as possible to enable widespread dissemination. Such criteria have led to the emergence of compliant/soft robots, 3D-printed robots, and inexpensive consumer-grade hardware, all of which constitute a major shift from the heavy, rigid, tight-tolerance robots used in industry. Traditional measurement devices that are suitable for sensing and controlling the motion of rigid robots, i.e., joint encoders, are incompatible or impractical for many of these new types of robots. Alternative approaches that do not rely on encoders are largely missing from robotics technology and must be developed for these novel design models. This project investigates ways of using only cameras for sensing and controlling the robot's motion. Vision-based algorithms for robotic walking, object grasping, and manipulation will be derived. Such algorithms will not only enable the use of these new-wave robots in unstructured environments but will also significantly lower the cost of traditional robotic systems and, therefore, boost their dissemination for industrial and educational purposes.

The project will focus on utilizing vision-based estimation schemes and learning methods for acquiring both robot configuration information and task models within a framework where modeling inaccuracies and environment uncertainties are dealt with by robust visual servoing approaches. Visual observations will be used to model the relationship between actuator inputs, manipulator configuration, and task states, and they will be combined with adaptive vision-based control schemes that are robust to modeling uncertainties and disturbances. The framework will fundamentally rely on convolutional neural networks (CNNs) to build the models from observation alone, both for a low-dimensional representation of configuration and for an image segmentation of the manipulator. Reinforcement learning methods will also be applied to assess the practicality of a modular combination of such methods with the offline-learned representations to perform complex positioning and control tasks. These approaches will be evaluated in the context of within-hand manipulation, compliant surgical tool control, locomotion of a 3D-printed multi-legged robot, and force-controlled grasping and peg insertion using a soft continuum manipulator. The contributions of our proposed work are that no prior model of a robot's configuration is needed because it is explicitly observed and inferred up front (system identification); that uncertainty affecting task performance is addressed by adapting the robot dynamics on the fly (model-through-confirmation); and that the broad applicability of our methods will be demonstrated through application to a wide variety of platforms. Work done on this project will help to enable lower-cost robotic and mechatronic hardware across a range of domains and will particularly impact the ability to control compliant and under-actuated structures.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
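As a concrete illustration of the encoderless sensing idea described above, the following is a minimal sketch (in PyTorch) of a small CNN that maps a raw camera frame to a low-dimensional estimate of the robot's configuration, which a vision-based controller could use in place of joint-encoder feedback. This is not the project's actual architecture: the layer sizes, the 6-DOF output, and the simple proportional control step are all illustrative assumptions.

import torch
import torch.nn as nn

class ConfigEstimator(nn.Module):
    """Illustrative CNN: camera frame -> low-dimensional configuration estimate."""

    def __init__(self, n_dof: int = 6):  # 6-DOF output is an assumption, not the project's choice
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # global pooling over the feature map
        )
        self.head = nn.Linear(64, n_dof)       # low-dimensional configuration estimate

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(image).flatten(1))

if __name__ == "__main__":
    model = ConfigEstimator(n_dof=6)
    frame = torch.rand(1, 3, 128, 128)         # placeholder camera frame
    q_hat = model(frame)                       # estimated configuration, shape (1, 6)
    # Purely illustrative proportional step toward a target configuration; the
    # project instead pairs such estimates with adaptive, uncertainty-robust control.
    q_target = torch.zeros_like(q_hat)
    command = 0.5 * (q_target - q_hat)
    print(q_hat.shape, command.shape)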

Project Outcomes

Journal Articles (6)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
Force-Based Simultaneous Mapping and Object Reconstruction for Robotic Manipulation
  • DOI:
    10.1109/lra.2022.3152244
  • Publication Date:
    2022-04
  • Journal:
  • Impact Factor:
    5.2
  • Authors:
    João Bimbo;A. S. Morgan;A. Dollar
  • Corresponding Author:
    João Bimbo;A. S. Morgan;A. Dollar
Manipulation for self-Identification, and self-Identification for better manipulation
  • DOI:
    10.1126/scirobotics.abe1321
  • Publication Date:
    2021-05-26
  • Journal:
  • Impact Factor:
    25
  • Authors:
    Hang, Kaiyu;Bircher, Walter G.;Dollar, Aaron M.
  • Corresponding Author:
    Dollar, Aaron M.
Complex manipulation with a simple robotic hand through contact breaking and caging
  • DOI:
    10.1126/scirobotics.abd2666
  • Publication Date:
    2021-05-12
  • Journal:
  • Impact Factor:
    25
  • Authors:
    Bircher, Walter G.;Morgan, Andrew S.;Dollar, Aaron M.
  • Corresponding Author:
    Dollar, Aaron M.
Complex In-Hand Manipulation Via Compliance-Enabled Finger Gaiting and Multi-Modal Planning
  • DOI:
    10.1109/lra.2022.3145961
  • Publication Date:
    2022
  • Journal:
  • Impact Factor:
    5.2
  • Authors:
    Morgan, Andrew;Hang, Kaiyu;Wen, Bowen;Bekris, Kostas E;Dollar, Aaron
  • Corresponding Author:
    Dollar, Aaron

Other Grants by Aaron Dollar

Collaborative Research: Self-Identification for Robot Manipulation under Uncertainty Aided by Passive Adaptability
  • Award Number:
    2132823
  • Fiscal Year:
    2022
  • Funding Amount:
    $425,000
  • Project Type:
    Standard Grant
FW-HTF-RL: Collaborative Research: Shared Autonomy for the Dull, Dirty, and Dangerous: Exploring Division of Labor for Humans and Robots to Transform the Recycling Sorting Industry
  • Award Number:
    1928448
  • Fiscal Year:
    2019
  • Funding Amount:
    $425,000
  • Project Type:
    Standard Grant
EFRI C3 SoRo: Muscle-like Cellular Architectures and Compliant, Distributed Sensing and Control for Soft Robots
  • Award Number:
    1832795
  • Fiscal Year:
    2018
  • Funding Amount:
    $425,000
  • Project Type:
    Standard Grant
NRI: INT: COLLAB: Integrated Modeling and Learning for Robust Grasping and Dexterous Manipulation with Adaptive Hands
  • Award Number:
    1734190
  • Fiscal Year:
    2017
  • Funding Amount:
    $425,000
  • Project Type:
    Standard Grant
NRI: Rethinking Multi-Legged Robots: Passive Terrain Adaptability through Underactuated Mechanisms and Exactly-Constrained Kinematics
  • Award Number:
    1637647
  • Fiscal Year:
    2016
  • Funding Amount:
    $425,000
  • Project Type:
    Standard Grant
NRI: Small: Dexterous Manipulation with Underactuated Hands: Strategies, Control Primitives, and Design for Open-Source Hardware
  • Award Number:
    1317976
  • Fiscal Year:
    2013
  • Funding Amount:
    $425,000
  • Project Type:
    Standard Grant
CAREER: Underactuated Precision Robotic Grasping and Manipulation
  • Award Number:
    0953856
  • Fiscal Year:
    2010
  • Funding Amount:
    $425,000
  • Project Type:
    Continuing Grant

Similar Overseas Grants

Collaborative Research: RI: Medium: Principles for Optimization, Generalization, and Transferability via Deep Neural Collapse
  • Award Number:
    2312841
  • Fiscal Year:
    2023
  • Funding Amount:
    $425,000
  • Project Type:
    Standard Grant
Collaborative Research: RI: Medium: Principles for Optimization, Generalization, and Transferability via Deep Neural Collapse
  • Award Number:
    2312842
  • Fiscal Year:
    2023
  • Funding Amount:
    $425,000
  • Project Type:
    Standard Grant
Collaborative Research: RI: Medium: Lie group representation learning for vision
  • Award Number:
    2313151
  • Fiscal Year:
    2023
  • Funding Amount:
    $425,000
  • Project Type:
    Continuing Grant
Collaborative Research: RI: Medium: Principles for Optimization, Generalization, and Transferability via Deep Neural Collapse
  • Award Number:
    2312840
  • Fiscal Year:
    2023
  • Funding Amount:
    $425,000
  • Project Type:
    Standard Grant
Collaborative Research: CompCog: RI: Medium: Understanding human planning through AI-assisted analysis of a massive chess dataset
  • Award Number:
    2312374
  • Fiscal Year:
    2023
  • Funding Amount:
    $425,000
  • Project Type:
    Standard Grant
Collaborative Research: CompCog: RI: Medium: Understanding human planning through AI-assisted analysis of a massive chess dataset
  • Award Number:
    2312373
  • Fiscal Year:
    2023
  • Funding Amount:
    $425,000
  • Project Type:
    Standard Grant
Collaborative Research: RI: Medium: Lie group representation learning for vision
  • Award Number:
    2313149
  • Fiscal Year:
    2023
  • Funding Amount:
    $425,000
  • Project Type:
    Continuing Grant
Collaborative Research: RI: Medium: Superhuman Imitation Learning from Heterogeneous Demonstrations
  • Award Number:
    2312955
  • Fiscal Year:
    2023
  • Funding Amount:
    $425,000
  • Project Type:
    Standard Grant
Collaborative Research: RI: Medium: Informed, Fair, Efficient, and Incentive-Aware Group Decision Making
  • Award Number:
    2313137
  • Fiscal Year:
    2023
  • Funding Amount:
    $425,000
  • Project Type:
    Standard Grant
Collaborative Research: RI: Medium: Lie group representation learning for vision
  • Award Number:
    2313150
  • Fiscal Year:
    2023
  • Funding Amount:
    $425,000
  • Project Type:
    Continuing Grant