CAREER: Cameras and Algorithms that turn Rays Efficiently into Everyday Reconstructions
Basic Information
- Award number: 2144956
- Principal investigator:
- Amount: $495,000
- Host institution:
- Institution country: United States
- Project type: Continuing Grant
- Fiscal year: 2022
- Funding country: United States
- Project period: 2022-06-01 to 2027-05-31
- Project status: Active
- Source:
- Keywords:
Project Summary
This award is funded in part under the American Rescue Plan Act of 2021 (Public Law 117-2). This project aims to enable people to capture digital versions of real-world scenes more accurately, efficiently, and flexibly than is currently possible, so that capture becomes a simple everyday task. In computer vision, capture of the real world is called reconstruction, and it is a core challenge that requires estimating the 3D shape, motion, object materials, and lighting within a scene. Successful reconstruction provides intelligent systems with spatial and geometric information about objects that is independent of lighting conditions, which they can use for reasoning or for building applications that present these objects from different viewing angles, e.g., in virtual and augmented reality. This project will scientifically investigate how to overcome the challenges of reconstruction by combining signals from different types of cameras in a way that is consistent with the physics of image formation. The project will integrate research and education by creating new interdisciplinary courses and by promoting diversity through outreach activities, e.g., supporting the local K-12 AI4ALL diversity effort and attending inclusive teaching workshops at Brown's Sheridan Center for Teaching and Learning. To help overcome the ill-posed problem of scene reconstruction from passive RGB cameras, this project has three areas of focus: 1) Investigate new camera systems that integrate multiple kinds of signals via physically based image formation models. Existing platforms typically handle one modality and frequency (visible light), but the project aims to combine visible light, time-of-flight, and event cameras so that each camera's weaknesses are balanced by the others, producing a signal of a quality that no individual camera could produce: high spatio-temporal resolution 3D video.
2) Investigate lighting and material decomposition via better capture, sampling, and reconstruction from heterogeneous omnidirectional cameras, using new fast view synthesis methods adapted to represent incident illumination. This will use learned material priors from factorizations of physically based reflectance models that can exploit captured full and partial omnidirectional samples. 3) Investigate hybrid representations, optimization, and machine learning methods, including initialization based on reliable sparse sampling from depth sensors, physically based self-supervised transforms to constrain optimization, and residual error channels that allow the model to explain all that it can in a physically meaningful way while still training on real-world data. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
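The event cameras mentioned in focus area 1 follow a well-known generative model: a pixel fires an event of polarity ±1 whenever its log-brightness drifts by a contrast threshold C since the last event. As a toy illustration of why fusing such a sensor with a frame camera can recover high temporal resolution, the sketch below (all values synthetic; this is not the project's actual method) simulates an event stream from a fast-varying signal and reconstructs brightness between frames by anchoring on one hypothetical RGB keyframe and integrating events in log space:

```python
import numpy as np

# Standard event-camera model: an event of polarity p = +/-1 fires when
# log-brightness moves by a contrast threshold C from the reference level.
C = 0.1
t = np.linspace(0.0, 1.0, 1001)
brightness = 1.0 + 0.5 * np.sin(2 * np.pi * t)   # fast signal a frame camera would miss

# Simulate the event stream from the continuous signal.
events = []                                      # list of (time, polarity)
ref = np.log(brightness[0])                      # reference log-level for this pixel
for ti, b in zip(t, brightness):
    while np.log(b) - ref >= C:                  # brightness rose past threshold
        ref += C
        events.append((ti, +1))
    while np.log(b) - ref <= -C:                 # brightness fell past threshold
        ref -= C
        events.append((ti, -1))

# Reconstruct high-rate brightness: anchor on one RGB "keyframe" at t = 0,
# then integrate event polarities in log space between frames.
log_b = np.log(brightness[0])
recon = [(0.0, np.exp(log_b))]
for ti, p in events:
    log_b += p * C
    recon.append((ti, np.exp(log_b)))
# By construction, the reconstruction tracks true log-brightness to within C.
```

The absolute brightness scale comes from the keyframe and the fine temporal detail from the events, which is the kind of complementarity the abstract refers to when it says the fused signal exceeds what any single camera could produce.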
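The "residual error channel" idea in focus area 3 (let a physically based model explain what it can, and absorb the remainder into a penalized residual) can be sketched as a toy inverse problem. This assumes a simple Lambertian model, image = albedo × shading, observed under two lightings with a shared albedo; all quantities and names below are illustrative, not the project's actual formulation:

```python
import numpy as np

# Toy "residual error channel": a shared Lambertian albedo explains what it can
# across two lighting conditions; a penalized per-observation residual absorbs
# what the physical model cannot (here, a deterministic stand-in for noise).
n = 8
s1 = np.linspace(0.4, 1.0, n)            # shading under light 1: s = max(0, n.l)
s2 = np.linspace(1.0, 0.4, n)            # shading under light 2
true_albedo = np.linspace(0.1, 0.9, n)
noise = 0.03 * np.sin(np.arange(n))      # unexplainable component
I1 = true_albedo * s1 + noise            # observed images: I = a * s + e
I2 = true_albedo * s2 - noise

albedo = np.full(n, 0.5)                 # unknown shared physical parameter
r1 = np.zeros(n)                         # residual channels, one per image
r2 = np.zeros(n)
lam, lr = 0.5, 0.2
for _ in range(5000):                    # gradient descent on the joint loss
    e1 = albedo * s1 + r1 - I1
    e2 = albedo * s2 + r2 - I2
    albedo -= lr * (e1 * s1 + e2 * s2)   # grad of 0.5*(e1^2 + e2^2) w.r.t. albedo
    r1 -= lr * (e1 + lam * r1)           # residuals pay an extra 0.5*lam*r^2
    r2 -= lr * (e2 + lam * r2)
```

Because the residuals are penalized, the optimizer pushes as much of the signal as possible into the physically meaningful albedo term, and the residuals stay small — which is what lets such a model train on real-world data without forcing the physical model to explain sensor noise.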
Project Outcomes
Journal articles (1)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Differentiable Appearance Acquisition from a Flash/No-flash RGB-D Pair
- DOI: 10.1109/iccp54855.2022.9887646
- Publication date: 2022-08
- Journal:
- Impact factor: 0
- Authors: Hyunjin Ku;Hyunho Ha;J. Lee;Dahyun Kang;J. Tompkin;Min H. Kim
- Corresponding author: Hyunjin Ku;Hyunho Ha;J. Lee;Dahyun Kang;J. Tompkin;Min H. Kim
Other Publications by James Tompkin
OmniSDF: Scene Reconstruction using Omnidirectional Signed Distance Functions and Adaptive Binoctrees
- DOI:
- Publication date: 2024
- Journal:
- Impact factor: 0
- Authors: Hak;Andreas Meuleman;Hyeonjoong Jang;James Tompkin;Min H. Kim
- Corresponding author: Min H. Kim
GHOST in the Robot: Virtual Reality Teleoperation for Mobile Manipulation
- DOI:
- Publication date:
- Journal:
- Impact factor: 0
- Authors: Calvin Bauer;Janeth Meraz;Are Oelsner;James Tompkin;Stefanie Tellex
- Corresponding author: Stefanie Tellex
Video-based Characters – Creating New Human Performances from a Multi-view Video Database
- DOI:
- Publication date: 2011
- Journal:
- Impact factor:
- Authors: Feng Xu;Yebin Liu;Carsten Stoll;James Tompkin;Gaurav Bharaj;Qionghai Dai;Hans-Peter Seidel;Jan Kautz;Christian Theobalt
- Corresponding author:
Semantic Attention Flow Fields for Dynamic Scene Decomposition
- DOI:
- Publication date: 2023
- Journal:
- Impact factor: 0
- Authors: Yiqing Liang;Eliot Laidlaw;Alexander Meyerowitz;Srinath Sridhar;James Tompkin
- Corresponding author: James Tompkin
Are Multi-view Edges Incomplete for Depth Estimation?
- DOI: 10.1007/s11263-023-01890-y
- Publication date: 2024-02-12
- Journal:
- Impact factor: 9.300
- Authors: Numair Khan;Min H. Kim;James Tompkin
- Corresponding author: James Tompkin
Other Grants by James Tompkin
III: Medium: Collaborative Research: Situated Visual Information Spaces
- Award number: 2107409
- Fiscal year: 2021
- Amount: $495,000
- Project type: Continuing Grant
Similar Overseas Grants
Windows for the Small-Sized Telescope (SST) Cameras of the Cherenkov Telescope Array (CTA)
- Award number: ST/Z000017/1
- Fiscal year: 2024
- Amount: $495,000
- Project type: Research Grant
PFI-TT: Broadening Real-Time Continuous Traffic Analysis on the Roadside using AI-Powered Smart Cameras
- Award number: 2329780
- Fiscal year: 2023
- Amount: $495,000
- Project type: Continuing Grant
Collaborative Research: Uncovering the Effects of Body-Worn Cameras on Officer and Community Outcomes
- Award number: 2317448
- Fiscal year: 2023
- Amount: $495,000
- Project type: Standard Grant
Full-field sensing of small- and medium-span bridges using 4K high-speed cameras and development of a database for infrastructure maintenance management
- Award number: 23H01483
- Fiscal year: 2023
- Amount: $495,000
- Project type: Grant-in-Aid for Scientific Research (B)
Collaborative Research: Uncovering the Effects of Body-Worn Cameras on Officer and Community Outcomes
- Award number: 2317449
- Fiscal year: 2023
- Amount: $495,000
- Project type: Standard Grant
Redefining the future of electromagnetic sensing: portable single-pixel millimeter-wave cameras operating in real-time
- Award number: EP/X022943/1
- Fiscal year: 2023
- Amount: $495,000
- Project type: Fellowship
CCSS: FLASH: Drone Obstacle Avoidance with Event Cameras: Bio-Inspired Architecture, Algorithm, and Platform
- Award number: 2302724
- Fiscal year: 2023
- Amount: $495,000
- Project type: Standard Grant
Collaborative Research: Enabling Intelligent Cameras in Internet-of-Things via a Holistic Platform, Algorithm, and Hardware Co-design
- Award number: 2346091
- Fiscal year: 2023
- Amount: $495,000
- Project type: Standard Grant
Hybrid Pixel Detectors for Next-Generation Diffraction Cameras in Scanning Electron Microscopy
- Award number: 2888389
- Fiscal year: 2023
- Amount: $495,000
- Project type: Studentship
Diagnosis and patient education tool for osteoporosis using smartphone cameras and AI technology
- Award number: 23K15682
- Fiscal year: 2023
- Amount: $495,000
- Project type: Grant-in-Aid for Early-Career Scientists