CompCog: Human Scene Processing Characterized by Computationally-derived Scene Primitives
Basic Information
- Award Number: 1439237
- Principal Investigator:
- Funding Amount: $463,200
- Institution:
- Institution Country: United States
- Award Type: Standard Grant
- Fiscal Year: 2014
- Funding Country: United States
- Project Period: 2014-09-01 to 2019-02-28
- Project Status: Completed
- Source:
- Keywords:
Project Abstract
How do our brains take the light entering our eyes and turn it into our experience of the world around us? Critically, this experience seems to involve a visual "vocabulary" that allows us to understand new scenes based on our prior knowledge. The investigators explore the nature of this visual language, examining the specific computations that are realized in the brain mechanisms used for scene perception. The work combines data from state-of-the-art computer vision systems with human neuroimaging both to predict brain responses when viewing complex, real-world scenes and to analyze and understand the hidden structure embedded in real-world images. This effort is essential for building a theory of how we are able to see and for improving machine vision systems. More broadly, biologically-inspired models of vision are essential for the effective deployment of intelligent technology in navigation systems, assistive devices, security verification, and visual information retrieval.

The artificial vision system adopted in this research is highly data-driven in that it learns about the visual world by continuously "looking at" real-world images on the World Wide Web. The model, known as "NEIL" (Never Ending Image Learner, http://www.neil-kb.com/), leverages cutting-edge big-data methods to extract a vocabulary of scene parts and relationships from hundreds of thousands of images. The relevance of this vocabulary to human vision will then be tested using both functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) neuroimaging. The hypothesis is that the application of prior knowledge about scenes expresses itself through learned associations between the specific parts and relations forming the vocabulary for scene perception. Moreover, different kinds of associations may be instantiated within distinct components of the functional brain network responsible for scene perception. Overall, this research will build on a recent, highly successful artificial vision system in order to provide a better-specified theory of the parts and relations underlying human scene perception. At the same time, the research will provide information about the human functional relevance of computationally-derived scene parts and relations, thereby helping to refine and improve artificial vision systems.
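As a rough illustration of how computationally-derived scene features can be related to neuroimaging data, the sketch below fits a voxel-wise encoding model: ridge regression from a feature matrix (one row per image, one column per scene part/attribute score, as a NEIL-style system might produce) to fMRI responses, scored by cross-validated correlation on held-out images. This is a generic technique from the scene-perception literature, not the project's specified analysis pipeline; the data are synthetic and every variable name is illustrative.

```python
# Illustrative sketch only: a voxel-wise encoding model testing whether
# computationally derived scene features predict fMRI responses to the
# same images. Feature and response matrices here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_images, n_features, n_voxels = 200, 50, 100

# X: one row per image, one column per scene part/attribute score.
X = rng.standard_normal((n_images, n_features))

# Y: simulated fMRI responses (one column per voxel), generated as a
# linear mixture of the features plus noise so there is signal to find.
true_weights = rng.standard_normal((n_features, n_voxels))
Y = X @ true_weights + 2.0 * rng.standard_normal((n_images, n_voxels))

# Cross-validated prediction accuracy: fit ridge weights on training
# images, correlate predicted and observed responses on held-out images.
scores = np.zeros(n_voxels)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X[train], Y[train])
    pred = model.predict(X[test])
    for v in range(n_voxels):
        scores[v] += np.corrcoef(pred[:, v], Y[test][:, v])[0, 1] / 5

print(f"mean cross-validated voxel correlation: {scores.mean():.2f}")
```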
Project Outcomes
Journal Articles (0)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
Other Publications by Michael Tarr
Other Grants by Michael Tarr
I-Corps: Using Neuroscience to Predict Consumer Preference
- Award Number: 1216835
- Fiscal Year: 2012
- Funding Amount: $463,200
- Award Type: Standard Grant
Learning Minimal Representations for Visual Navigation and Recognition II
- Award Number: 0214383
- Fiscal Year: 2003
- Funding Amount: $463,200
- Award Type: Continuing Grant
COLLABORATIVE RESEARCH: Categorization and Expertise in Human Visual Cognition II
- Award Number: 0094491
- Fiscal Year: 2001
- Funding Amount: $463,200
- Award Type: Continuing Grant
Categorization and Expertise in Human Visual Cognition
- Award Number: 9615819
- Fiscal Year: 1997
- Funding Amount: $463,200
- Award Type: Continuing Grant
The Object Data Bank: A Collaborative Project Proposal to Provide a Standardized Realistic Stimulus Set of Common Objects for Experimental Psychology
- Award Number: 9596200
- Fiscal Year: 1995
- Funding Amount: $463,200
- Award Type: Standard Grant
The Object Data Bank: A Collaborative Project Proposal to Provide a Standardized Realistic Stimulus Set of Common Objects for Experimental Psychology
- Award Number: 9412456
- Fiscal Year: 1994
- Funding Amount: $463,200
- Award Type: Standard Grant
Similar NSFC Grants
Screening of glucose-lowering small-molecule compounds targeting human ZAG protein and observation of their therapeutic efficacy
- Award Number:
- Year Approved: 2025
- Funding Amount: ¥0
- Award Type: Provincial/Municipal Project
Molecular mechanisms of the HBV S-Human ESPL1 fusion gene in the pathogenesis of chronic hepatitis B
- Award Number: 81960115
- Year Approved: 2019
- Funding Amount: ¥340,000
- Award Type: Regional Science Fund Project
"Human-in-Loop" control of a lower-limb rehabilitation robot based on an adaptive surface-EMG model
- Award Number: 61005070
- Year Approved: 2010
- Funding Amount: ¥200,000
- Award Type: Young Scientists Fund Project
Similar Overseas Grants
Symmetry as a cue to object and scene representations in human visual cortex
- Award Number: RGPIN-2020-06104
- Fiscal Year: 2022
- Funding Amount: $463,200
- Award Type: Discovery Grants Program - Individual
Investigating scene processing in the human brain
- Award Number: BB/V003917/1
- Fiscal Year: 2021
- Funding Amount: $463,200
- Award Type: Research Grant
CHS: Small: DeepCrowd: A Crowd-assisted Deep Learning-based Disaster Scene Assessment System with Active Human-AI Interactions
- Award Number: 2130263
- Fiscal Year: 2021
- Funding Amount: $463,200
- Award Type: Standard Grant
Symmetry as a cue to object and scene representations in human visual cortex
- Award Number: RGPIN-2020-06104
- Fiscal Year: 2021
- Funding Amount: $463,200
- Award Type: Discovery Grants Program - Individual
Investigating scene processing in the human brain
- Award Number: BB/V003887/1
- Fiscal Year: 2021
- Funding Amount: $463,200
- Award Type: Research Grant
CHS: Small: DeepCrowd: A Crowd-assisted Deep Learning-based Disaster Scene Assessment System with Active Human-AI Interactions
- Award Number: 2008228
- Fiscal Year: 2021
- Funding Amount: $463,200
- Award Type: Standard Grant
Symmetry as a cue to object and scene representations in human visual cortex
- Award Number: DGECR-2020-00127
- Fiscal Year: 2020
- Funding Amount: $463,200
- Award Type: Discovery Launch Supplement
3D human tracking and scene reconstruction for audio-visual AR/VR
- Award Number: 2480933
- Fiscal Year: 2020
- Funding Amount: $463,200
- Award Type: Studentship
Symmetry as a cue to object and scene representations in human visual cortex
- Award Number: RGPIN-2020-06104
- Fiscal Year: 2020
- Funding Amount: $463,200
- Award Type: Discovery Grants Program - Individual
Places in the brain: Converging neural, behavioral, and developmental evidence for multiple systems in human visual scene processing
- Award Number: 10600985
- Fiscal Year: 2019
- Funding Amount: $463,200
- Award Type: