NRI: FND: Self-supervised Object Discovery, Detection and Visual Object Search
Basic Information
- Award Number: 1925231
- Principal Investigator:
- Amount: $500,000
- Host Institution:
- Host Institution Country: United States
- Grant Type: Standard Grant
- Fiscal Year: 2019
- Funding Country: United States
- Project Period: 2019-09-01 to 2023-08-31
- Status: Completed
- Source:
- Keywords:
Project Abstract
The ubiquitous deployment of service robots in homes and service environments rests on the ability to detect and recognize objects of interest and to navigate toward them. In the past few years, largely enabled by machine-learning approaches, the computer vision community has made tremendous progress. The standard datasets for training and evaluation, however, typically consist of static images curated from the internet and require extensive manual annotation. While this paradigm is effective for learning commonly encountered object categories, it does not generalize to the possibly thousands of objects of interest in service robotics applications. The development of learning algorithms that do not require supervision through detailed human annotations is one of the central problems in computer vision and artificial intelligence. The open problems in this area are motivated by our understanding of how humans and biological systems acquire new knowledge about visual content in their environments. This project will lead to a new class of algorithms for object discovery, object detection, 3-D environment modeling, and navigation. The research will support a cohort of diverse graduate and undergraduate students at George Mason University and will further advance the active vision benchmark dataset for evaluating the development and deployment of service robots.

The technical aims of the project focus on developing methods for learning object representations that are specific to the context where the robot operates, can be learned in a self-supervised manner without laborious annotations, and are reusable across multiple tasks. This research uses camera motion as a form of self-supervision for learning new multi-view object embeddings, followed by zero-shot or few-shot training of powerful object detector models with little or no labelling effort.
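The abstract describes using camera motion as self-supervision for multi-view object embeddings. A minimal sketch of the kind of contrastive objective commonly used for such view-invariant embeddings is shown below; this is an illustrative InfoNCE loss under our own assumptions (function name, temperature value), not code from the project:

```python
import numpy as np

def info_nce_loss(view_a, view_b, temperature=0.1):
    """Contrastive (InfoNCE) loss over a batch of paired views.

    Row i of `view_a` and row i of `view_b` embed the SAME object seen
    from two camera viewpoints (a positive pair obtained "for free" from
    camera motion); the other rows in the batch act as negatives.
    """
    # L2-normalise so the dot product is cosine similarity.
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature  # (N, N) similarity matrix
    # Log-softmax over each row; the positive pair sits on the diagonal.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Minimising such a loss pulls embeddings of the same object across viewpoints together while pushing different objects apart, with no manual labels required.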
The inherent limitations of object detection will be tackled in the robotic setting by semantic target-driven navigation techniques, learned in a reinforcement learning framework on top of the representations and architectures developed for object detection. These policies will constitute a basic set of visually guided navigation skills for the robotic agent and will be integrated with mapping and exploration strategies. The approaches are motivated by the current challenges of embodied agents' perception in indoor scenes, but the solutions will be broadly applicable in settings that require long-term, ongoing interaction between an agent and a dynamically changing environment.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
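As a toy illustration of target-driven navigation learned by reinforcement, the sketch below runs tabular Q-learning on a small grid world where an agent learns to reach a designated goal cell. This is only an illustrative stand-in under simplifying assumptions (a known 5x5 grid, tabular states, hand-picked hyperparameters); the project's actual policies operate on visual representations with deep reinforcement learning:

```python
import random

def train_goal_policy(size=5, goal=(4, 4), episodes=3000,
                      alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning: learn to navigate a grid toward a target cell."""
    rng = random.Random(seed)
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
    Q = {(r, c): [0.0] * 4 for r in range(size) for c in range(size)}

    def step(state, a):
        dr, dc = actions[a]
        nxt = (max(0, min(size - 1, state[0] + dr)),
               max(0, min(size - 1, state[1] + dc)))
        reward = 1.0 if nxt == goal else -0.01  # small per-step penalty
        return nxt, reward, nxt == goal

    for _ in range(episodes):
        state = (0, 0)
        for _ in range(100):  # episode length cap
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda i: Q[state][i])
            nxt, reward, done = step(state, a)
            target = reward + (0.0 if done else gamma * max(Q[nxt]))
            Q[state][a] += alpha * (target - Q[state][a])
            state = nxt
            if done:
                break
    return Q

def greedy_path(Q, start=(0, 0), goal=(4, 4), size=5, max_steps=50):
    """Roll out the learned policy greedily and return the visited states."""
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    state, path = start, [start]
    for _ in range(max_steps):
        if state == goal:
            break
        a = max(range(4), key=lambda i: Q[state][i])
        state = (max(0, min(size - 1, state[0] + actions[a][0])),
                 max(0, min(size - 1, state[1] + actions[a][1])))
        path.append(state)
    return path
```

The same train-then-roll-out structure carries over to the embodied setting, with the tabular state replaced by a learned visual representation and the reward tied to reaching the semantic target.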
Project Outcomes
Journal articles (4)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Learning-Augmented Model-Based Planning for Visual Exploration
- DOI:
- Published: 2023
- Journal:
- Impact factor: 0
- Authors: Yimeng Li;Arnab Debnath;Gregory J. Stein;Jana Košecká
- Corresponding author: Jana Košecká
Object Pose Estimation using Mid-level Visual Representations
- DOI: 10.1109/iros47612.2022.9981452
- Published: 2022-03
- Journal:
- Impact factor: 0
- Authors: Negar Nejatishahidin;Pooya Fayyazsanavi;J. Kosecka
- Corresponding author: Negar Nejatishahidin;Pooya Fayyazsanavi;J. Kosecka
Learning View and Target Invariant Visual Servoing for Navigation
- DOI:
- Published: 2020
- Journal:
- Impact factor: 0
- Authors: Li, Yimeng;Kosecka, Jana
- Corresponding author: Kosecka, Jana
Uncertainty Aware Proposal Segmentation for Unknown Object Detection
- DOI: 10.1109/wacvw54805.2022.00030
- Published: 2021-11
- Journal:
- Impact factor: 0
- Authors: Yimeng Li;J. Kosecka
- Corresponding author: Yimeng Li;J. Kosecka
Other Publications by Jana Kosecka
Rank Conditions on the Multiple-View Matrix
- DOI: 10.1023/b:visi.0000022286.53224.3d
- Published: 2004
- Journal:
- Impact factor: 19.5
- Authors: Yi Ma;Kun Huang;René Vidal;Jana Kosecka;S. Sastry
- Corresponding author: S. Sastry
Other Grants by Jana Kosecka
NRI: Collaborative Research: Task Dependent Semantic Modeling for Robot Perception
- Award Number: 1527208
- Fiscal Year: 2015
- Amount: $500,000
- Grant Type: Standard Grant
CAREER: Geometric and Appearance Based Methods for Model Acquisition
- Award Number: 0347774
- Fiscal Year: 2004
- Amount: $500,000
- Grant Type: Continuing Grant
Similar NSFC Grants
Molecular Mechanisms of Carbofuran Degradation by Novosphingobium sp. FND-3
- Award Number: 31670112
- Year Approved: 2016
- Amount: ¥620,000
- Grant Type: General Program
Similar Overseas Grants
Movement Perception in Functional Neurological Disorder (FND)
- Award Number: MR/Y004000/1
- Fiscal Year: 2024
- Amount: $500,000
- Grant Type: Research Grant
NRI: FND: Collaborative Research: DeepSoRo: High-dimensional Proprioceptive and Tactile Sensing and Modeling for Soft Grippers
- Award Number: 2348839
- Fiscal Year: 2023
- Amount: $500,000
- Grant Type: Standard Grant
S&AS: FND: COLLAB: Planning and Control of Heterogeneous Robot Teams for Ocean Monitoring
- Award Number: 2311967
- Fiscal Year: 2022
- Amount: $500,000
- Grant Type: Standard Grant
NRI: FND: Collaborative Research: DeepSoRo: High-dimensional Proprioceptive and Tactile Sensing and Modeling for Soft Grippers
- Award Number: 2024882
- Fiscal Year: 2021
- Amount: $500,000
- Grant Type: Standard Grant
NRI: FND: Collaborative Research: DeepSoRo: High-dimensional Proprioceptive and Tactile Sensing and Modeling for Soft Grippers
- Award Number: 2024646
- Fiscal Year: 2021
- Amount: $500,000
- Grant Type: Standard Grant
NRI: FND: Foundations for Physical Co-Manipulation with Mixed Teams of Humans and Soft Robots
- Award Number: 2024792
- Fiscal Year: 2021
- Amount: $500,000
- Grant Type: Standard Grant
NRI: FND: Foundations for Physical Co-Manipulation with Mixed Teams of Humans and Soft Robots
- Award Number: 2024670
- Fiscal Year: 2021
- Amount: $500,000
- Grant Type: Standard Grant
NRI: FND: Natural Power Transmission through Unconstrained Fluids for Robotic Manipulation
- Award Number: 2024409
- Fiscal Year: 2020
- Amount: $500,000
- Grant Type: Standard Grant
NRI: FND: Multi-Manipulator Extensible Robotic Platforms
- Award Number: 2024435
- Fiscal Year: 2020
- Amount: $500,000
- Grant Type: Standard Grant
Collaborative Research: NRI: FND: Flying Swarm for Safe Human Interaction in Unstructured Environments
- Award Number: 2024615
- Fiscal Year: 2020
- Amount: $500,000
- Grant Type: Standard Grant