Reflexive robotics using asynchronous perception

Basic Information

  • Grant number:
    EP/S035761/1
  • Principal investigator:
  • Amount:
    $487,800
  • Host institution:
  • Host institution country:
    United Kingdom
  • Project type:
    Research Grant
  • Fiscal year:
    2020
  • Funding country:
    United Kingdom
  • Start and end dates:
    2020 to (no data)
  • Project status:
    Completed

Project Abstract

This project will develop a fundamentally different approach to visual perception and autonomy, in which the concept of an image itself is replaced with a stream of independently firing pixels, similar to the unsynchronised biological cells of the retina. Recent advances in computer vision and machine learning have enabled robots that can perceive, understand, and interact intelligently with their environments. However, this "interpretive" behaviour is just one of the fundamental models of autonomy found in nature. The techniques developed in this project will exploit recent breakthroughs in instantaneous, non-image-based visual sensing to enable entirely new types of autonomous system. The corresponding step change in robotic capabilities will impact the manufacturing, space, autonomous vehicle and medical sectors.

If we perceive an object approaching at high speed, we instinctively try to avoid it without taking the time to interpret the scene. It is not important to understand what the object is or why it is approaching us. This "reflexive" behavioural model is vital for reacting to time-critical events; in such cases, the situation has often already been resolved by the time we become consciously aware of it. Reflexive behaviour is also a vital component of continuous control problems. We are reluctant to take our eyes off the road while driving, because we know that we will rapidly begin to veer off course without a constant cycle of perception and correction. We also find it far easier to pick up and manipulate objects while looking at them, rather than relying entirely on tactile sensing. Unfortunately, visual sensing hardware requires enormous bandwidth: megapixel cameras produce millions of bytes per frame. Thus the temporal sampling rate is low, reaction times are high, and reflexive adjustments based on visual data become impractical.

Thanks to recent advances in visual sensor technology, we finally have the opportunity to overturn the paradigm of vision being impractical for low-latency problems, and to facilitate a step change in robotic capabilities. Asynchronous visual sensors (also known as event cameras) eschew regular sensor-wide updates (i.e. images). Instead, every pixel independently and asynchronously transmits a packet of information as soon as it detects an intensity change relative to its previous transmission. This drastically reduces data bandwidth by avoiding the redundant transmission of unchanged pixels. More importantly, because these packets are transmitted immediately, the sensor typically provides a latency reduction of three orders of magnitude (30 ms to 30 µs) between an event occurring and it being perceived.

This advancement in visual sensing is dramatic, but we are desperately in need of a commensurate revolution in robotic perception research. Without the concepts of the image or synchronous sampling, decades of computer vision and machine learning research are rendered unusable with these sensors. This project will provide the theoretical foundations for the robot perception revolution by developing novel asynchronous paradigms for both perception and understanding. Mirroring biological systems, this will comprise a hierarchical perception framework encompassing both low-level reflexes and high-level understanding, in a manner reminiscent of modern deep learning. However, unlike deep learning, pixel-update events will occur asynchronously and will propagate independently through the system, hence maintaining extremely low latency.

The sensor technology is still in its early trial phase, and few researchers are exploring its implications for perception. No group, nationally or internationally, is currently making a concerted effort in this area. Hence, this project not only lays the groundwork for a plethora of new biologically inspired "reflexive robotics" applications; it will also support the development of a unique new research team, placing the UK at the forefront of this exciting field.
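To make the sensing model concrete, the sketch below shows, in Python, how an asynchronous event stream might be consumed. Each event is an independent per-pixel packet (pixel coordinates, microsecond timestamp, polarity of the intensity change) rather than a frame, and a lightweight reflex reacts to every packet as it arrives without ever assembling an image. The `Event` structure, the `simulated_sensor` source and the `reflex_layer` thresholding are illustrative assumptions for this sketch only; they are not part of the project or of any particular event-camera SDK.

```python
# Minimal sketch of asynchronous (event-based) perception, using a
# hypothetical event source. Real event cameras expose similar per-pixel
# packets through their own vendor APIs.
from dataclasses import dataclass
from typing import Iterator
import random

@dataclass
class Event:
    x: int          # pixel column
    y: int          # pixel row
    t_us: int       # timestamp in microseconds
    polarity: int   # +1 brightness increase, -1 decrease

def simulated_sensor(n_events: int = 1000) -> Iterator[Event]:
    """Stand-in for an event camera: emits independent per-pixel packets."""
    t = 0
    for _ in range(n_events):
        t += random.randint(1, 50)  # packets arrive microseconds apart
        yield Event(x=random.randrange(640), y=random.randrange(480),
                    t_us=t, polarity=random.choice((-1, 1)))

def reflex_layer(events: Iterator[Event], window_us: int = 2000,
                 threshold: int = 30) -> None:
    """Low-level reflex: fire if the recent event density in one half of the
    field of view exceeds a threshold, without reconstructing a frame."""
    recent: list[Event] = []
    for ev in events:
        recent.append(ev)
        # Keep only events inside the sliding time window.
        recent = [e for e in recent if ev.t_us - e.t_us <= window_us]
        left_activity = sum(1 for e in recent if e.x < 320)
        if left_activity > threshold:
            print(f"t={ev.t_us}us: reflex fired (motion on left half)")
            recent.clear()  # simple refractory reset after a reflex

if __name__ == "__main__":
    reflex_layer(simulated_sensor())
```

Higher-level "interpretive" modules could subscribe to the same stream and integrate events over longer time windows, mirroring the hierarchical, asynchronously propagating framework described above; the point of the sketch is only that no stage waits for a complete synchronous frame.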

Project Outputs

Journal articles (10)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
DeFeat-Net: General Monocular Depth via Simultaneous Unsupervised Representation Learning
Learning generic deep feature representations
  • DOI:
  • Publication date:
    2022
  • Journal:
  • Impact factor:
    0
  • Authors:
    Spencer Martin Jaime
  • Corresponding author:
    Spencer Martin Jaime
RaSpectLoc: RAman SPECTroscopy-dependent robot LOCalisation
  • DOI:
    10.1109/iros55552.2023.10342198
  • Publication date:
    2023
  • Journal:
  • Impact factor:
    0
  • Authors:
    Thirgood C
  • Corresponding author:
    Thirgood C
ASL-SLAM: An Asynchronous Formulation of Lines for SLAM with Event Sensors
  • DOI:
    10.1145/3523132.3523146
  • Publication date:
    2022
  • Journal:
  • Impact factor:
    0
  • Authors:
    Nong X
  • Corresponding author:
    Nong X
The Monocular Depth Estimation Challenge
  • DOI:
    10.1109/wacvw58289.2023.00069
  • Publication date:
    2022-11
  • Journal:
  • Impact factor:
    0
  • Authors:
    Jaime Spencer;C. Qian;Chris Russell;Simon Hadfield;E. Graf;W. Adams;A. Schofield;J. Elder;R. Bowden;Heng Cong;S. Mattoccia;Matteo Poggi;Zeeshan Khan Suri;Yang Tang;Fabio Tosi;Hao Wang;Youming Zhang;Yusheng Zhang;Chaoqiang Zhao
  • Corresponding author:
    Jaime Spencer;C. Qian;Chris Russell;Simon Hadfield;E. Graf;W. Adams;A. Schofield;J. Elder;R. Bowden;Heng Cong;S. Mattoccia;Matteo Poggi;Zeeshan Khan Suri;Yang Tang;Fabio Tosi;Hao Wang;Youming Zhang;Yusheng Zhang;Chaoqiang Zhao

Other publications by Simon Hadfield

TACTIC: Joint Rate-Distortion-Accuracy Optimisation for Low Bitrate Compression
  • DOI:
  • Publication date:
    2021
  • Journal:
  • Impact factor:
    0
  • Authors:
    Nikolina Kubiak;Simon Hadfield
  • Corresponding author:
    Simon Hadfield
Prototype for multidisciplinary research in the context of the Internet of Things
  • DOI:
    10.1016/j.jnca.2016.11.023
  • Publication date:
    2017
  • Journal:
  • Impact factor:
    0
  • Authors:
    M. López;T. Drysdale;Simon Hadfield;M. I. Maricar
  • Corresponding author:
    M. I. Maricar
From Vision to Grasping: Adapting Visual Networks
  • DOI:
    10.1007/978-3-319-64107-2_38
  • Publication date:
    2017
  • Journal:
  • Impact factor:
    0
  • Authors:
    Rebecca Allday;Simon Hadfield;R. Bowden
  • Corresponding author:
    R. Bowden
Direct-from-Video: Unsupervised NRSfM
  • DOI:
    10.1007/978-3-319-49409-8_50
  • Publication date:
    2016
  • Journal:
  • Impact factor:
    0
  • Authors:
    K. Lebeda;Simon Hadfield;R. Bowden
  • Corresponding author:
    R. Bowden
What Did You Think Would Happen? Explaining Agent Behaviour Through Intended Outcomes


Similar Overseas Grants

Promoting Functional Neck Motion in Patients with Cerebral Palsy using a Robotic Neck Brace
  • Grant number:
    10742373
  • Fiscal year:
    2023
  • Funding amount:
    $487,800
  • Project type:
Lighting up zooplankton - mapping marine light using robotics
  • Grant number:
    2859995
  • Fiscal year:
    2023
  • Funding amount:
    $487,800
  • Project type:
    Studentship
Automation of Tenanted Arches NDT Inspections using Robotics and Machine Learning
  • Grant number:
    10089475
  • Fiscal year:
    2023
  • Funding amount:
    $487,800
  • Project type:
    Collaborative R&D
Enhancing robotic head and neck surgical skills using stimulated simulation
  • Grant number:
    10586874
  • Fiscal year:
    2023
  • Funding amount:
    $487,800
  • Project type:
Development and validation of a high-fidelity gynecologic training platform for robotic-assisted surgery using 3D printing technology
  • Grant number:
    10821242
  • Fiscal year:
    2023
  • Funding amount:
    $487,800
  • Project type:
Antibiotic discovery using a DBTL plug-and-play robotics pipeline
  • Grant number:
    2898886
  • Fiscal year:
    2023
  • Funding amount:
    $487,800
  • Project type:
    Studentship
Music therapy experiment system using robot for effective emotional intelligence training of ASD children
  • Grant number:
    23K12755
  • Fiscal year:
    2023
  • Funding amount:
    $487,800
  • Project type:
    Grant-in-Aid for Early-Career Scientists
Mapping ankle-foot stiffness to socket comfort and pressure using a robotic emulator platform to personalize prosthesis function via human-in-the-loop optimization
  • Grant number:
    10584383
  • Fiscal year:
    2023
  • Funding amount:
    $487,800
  • Project type:
Toward Restoration of Normative Postural Control and Stability using Neural Control of Powered Prosthetic Ankles
  • Grant number:
    10745442
  • Fiscal year:
    2023
  • Funding amount:
    $487,800
  • Project type:
Feasibility of Using Maestro Hand Exoskeleton in Post-stroke Hand Rehabilitation to Improve Joint Coordination
  • Grant number:
    10515326
  • Fiscal year:
    2022
  • Funding amount:
    $487,800
  • Project type: