CompCog: HNDS-R: Self-Supervision of Visual Learning From Spatiotemporal Context
Basic Information
- Award Number: 2216127
- Principal Investigator: Bradley Wyble
- Amount: $497K
- Host Institution:
- Host Institution Country: United States
- Project Type: Standard Grant
- Fiscal Year: 2022
- Funding Country: United States
- Project Period: 2022-09-15 to 2025-08-31
- Status: Ongoing
- Source:
- Keywords:
Project Abstract
Modern computer vision models are trained on image sets numbering in the billions, and yet they are still far less robust than the visual systems of small children, who have a much smaller range of visual experience. This project will use our understanding of how infants experience the world in their first years of life to develop new methods of training artificial intelligence programs to decode the information they receive from a camera. One advantage that children have over computers is that they experience the visual world as a journey through space rather than as a series of randomly collected, unrelated images. Children thus have a way to evaluate the similarity of two visual scenes based on their vantage point for each scene. The investigators will generate highly realistic scenes modeled on the perspective of a young child moving through a house, which will be used to develop a computer algorithm that learns how to recognize objects, surfaces, and other visual concepts. The work will provide new insights into improving computer vision for real-world problems, a field that is growing rapidly due to its applications in areas including household robots, assistive robots, and self-driving cars. The project will support interdisciplinary graduate and postdoctoral training as well as the production of widely accessible STEM educational resources through Neuromatch, a summer school that emerged during the pandemic as a way to reach students at minimal cost and with a low carbon footprint.

The investigators will develop a critical theory of visual learning, inspired by how human children learn, with the potential to reshape the fundamentals of learning in computer vision and machine learning. The research hypothesizes that a key ingredient in human visual learning is spatiotemporal contiguity: the fact that images of the world are experienced in a sequence as a child moves through space. The project has two components aimed at ultimately developing a new algorithm for visual learning based on human learning. First, a data set will be created using ray tracing to generate sequences of photorealistic images in much the same way a child would experience them. These images will then be coupled with recent innovations in self-supervised deep learning to determine how spatiotemporal image sequences can augment computer vision, using image classification and other tasks as tests. The resulting algorithm will produce artificial neural networks that respond to visual patterns. Those responses can be compared with the responses of neural networks in the human brain, as measured with fMRI, to determine through representational-similarity analysis whether the sequence-learning mechanism is a better approximation of human visual learning than state-of-the-art computer vision methods. Moreover, this analysis technique can be used as a searchlight to highlight the regions of the brain that are most similar to the newly developed artificial neural networks, which is helpful for determining how different brain areas contribute to visual learning. Students supported by this project will conduct research at the interface between psychology and computer science, and the project will also contribute to the development of STEM educational resources. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
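The abstract does not specify which self-supervised objective will be used, so the sketch below is only one plausible illustration of learning from spatiotemporal contiguity: frames rendered a few steps apart along a simulated walkthrough are treated as positive pairs in a contrastive (InfoNCE-style) loss. The `FrameEncoder`, `temporal_infonce`, and `sample_pairs` names, the tiny CNN, and the `max_offset` window are hypothetical choices for illustration, not the project's actual pipeline.

```python
# Minimal sketch (NOT the project's actual method): temporally adjacent frames
# from one rendered walkthrough serve as positive pairs for a contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrameEncoder(nn.Module):
    """Tiny CNN encoder standing in for a full vision backbone."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return F.normalize(self.proj(h), dim=1)

def temporal_infonce(z_anchor, z_neighbor, temperature=0.1):
    """InfoNCE loss: each anchor frame should match its temporal neighbor
    against all other frames in the batch."""
    logits = z_anchor @ z_neighbor.t() / temperature   # (B, B) similarities
    targets = torch.arange(z_anchor.size(0))           # positives on the diagonal
    return F.cross_entropy(logits, targets)

def sample_pairs(sequence, max_offset=3):
    """Pick (frame_t, frame_t+dt) pairs from a (T, 3, H, W) walkthrough tensor."""
    T = sequence.size(0)
    t = torch.randint(0, T - max_offset, (T // 2,))
    dt = torch.randint(1, max_offset + 1, (T // 2,))
    return sequence[t], sequence[t + dt]

# Toy usage on random "frames"; a real run would use the ray-traced sequences.
encoder = FrameEncoder()
walkthrough = torch.randn(32, 3, 64, 64)               # 32 consecutive viewpoints
anchors, neighbors = sample_pairs(walkthrough)
loss = temporal_infonce(encoder(anchors), encoder(neighbors))
loss.backward()
```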
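Representational-similarity analysis, which the abstract names explicitly, can be illustrated with a short sketch: build a representational dissimilarity matrix (RDM) from the model's activations and another from an fMRI region's voxel patterns over the same stimuli, then correlate the two. The array shapes, the correlation-distance metric, and the Spearman comparison below are common defaults assumed for illustration; the project's actual analysis choices are not stated in the abstract.

```python
# Minimal RSA sketch under assumed defaults: correlation-distance RDMs,
# compared with a Spearman rank correlation.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """patterns: (n_stimuli, n_features) -> condensed RDM of pairwise
    correlation distances (1 - Pearson r) between stimulus patterns."""
    return pdist(patterns, metric="correlation")

def rsa_score(model_activations, voxel_patterns):
    """Spearman correlation between the model RDM and the brain RDM; higher
    values mean the model orders stimulus similarities more like the brain."""
    rho, _ = spearmanr(rdm(model_activations), rdm(voxel_patterns))
    return rho

# Toy usage: 50 stimuli, a 128-unit model layer, a 300-voxel region of interest.
rng = np.random.default_rng(0)
model_acts = rng.standard_normal((50, 128))
roi_voxels = rng.standard_normal((50, 300))
print(f"model-brain RDM correlation: {rsa_score(model_acts, roi_voxels):.3f}")
```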
Project Outcomes
Journal articles (0)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
Other Publications by Bradley Wyble
Other Grants by Bradley Wyble
CompCog: Bridging the gap between behavioral and neural correlates of attention using a computational model of neural mechanisms
- Award Number: 1734220
- Fiscal Year: 2017
- Funding Amount: $497K
- Project Type: Standard Grant
Integrating Spatial and Temporal Models of Visual Attention
- Award Number: 1331073
- Fiscal Year: 2013
- Funding Amount: $497K
- Project Type: Standard Grant
Similar Overseas Grants
Collaborative Research: HNDS-I: NewsScribe - Extending and Enhancing the Media Cloud Searchable Global Online News Archive
- Award Number: 2341858
- Fiscal Year: 2024
- Funding Amount: $497K
- Project Type: Standard Grant
Collaborative Research: HNDS-I: NewsScribe - Extending and Enhancing the Media Cloud Searchable Global Online News Archive
- Award Number: 2341859
- Fiscal Year: 2024
- Funding Amount: $497K
- Project Type: Standard Grant
Collaborative Research: HNDS-I. Mobility Data for Communities (MD4C): Uncovering Segregation, Climate Resilience, and Economic Development from Cell-Phone Records
- Award Number: 2420945
- Fiscal Year: 2024
- Funding Amount: $497K
- Project Type: Standard Grant
Collaborative Research: HNDS-R Networks and Health Disparities in Delays in Diagnosis of Medical Conditions with Ambiguous Symptoms
- Award Number: 2241537
- Fiscal Year: 2023
- Funding Amount: $497K
- Project Type: Standard Grant
Collaborative Research: SOS-DCI / HNDS-R: Advancing Semantic Network Analysis to Better Understand How Evaluative Exchanges Shape Scientific Arguments
- Award Number: 2244805
- Fiscal Year: 2023
- Funding Amount: $497K
- Project Type: Standard Grant
Collaborative Research: HNDS-I: Cyberinfrastructure for Human Dynamics and Resilience Research
- Award Number: 2318203
- Fiscal Year: 2023
- Funding Amount: $497K
- Project Type: Standard Grant
Collaborative Research: HNDS-R: Human Networks, Sustainable Development, and Lived Experience in a Nonindustrial Society
- Award Number: 2212898
- Fiscal Year: 2023
- Funding Amount: $497K
- Project Type: Standard Grant
Collaborative Research: HNDS-R: Polarization, Information Integrity, and Diffusion
- Award Number: 2242072
- Fiscal Year: 2023
- Funding Amount: $497K
- Project Type: Standard Grant
HNDS-R - Collaborative Research: An Integrated Analysis of the COVID-19 Crisis on Labor Market Outcomes and Mortality
- Award Number: 2242472
- Fiscal Year: 2023
- Funding Amount: $497K
- Project Type: Standard Grant
HNDS-R - Collaborative Research: An Integrated Analysis of the COVID-19 Crisis on Labor Market Outcomes and Mortality
- Award Number: 2242581
- Fiscal Year: 2023
- Funding Amount: $497K
- Project Type: Standard Grant