CompCog: Template Contrast and Saliency (TCAS) Toolbox: a tool to visualize parallel attentive evaluation of scenes

Basic Information

  • Award Number:
    1921735
  • Principal Investigator:
  • Amount:
    $656,900
  • Host Institution:
  • Host Institution Country:
    United States
  • Project Type:
    Standard Grant
  • Fiscal Year:
    2019
  • Funding Country:
    United States
  • Project Period:
    2019-08-15 to 2023-07-31
  • Project Status:
    Completed

Project Abstract

One of the most common visual tasks humans do is use their eyes to find objects in the world around them. This task involves analyzing all the visual objects and backgrounds in the scene. This is a complicated task because the brain has to separate objects from the background. The brain also has to process the color, shape, and size of all objects. The aim of the research is to build a mathematical model that can find objects in scenes, despite the difficulty of the problem. The model is inspired by the visual system. It uses two ways to process information. First, it uses central vision to get a fine-grained analysis of the object it is looking at. Second, it also uses peripheral vision, which is the area around and away from central vision. Peripheral vision can analyze several objects at the same time but is less precise than central vision. The ultimate goal of the project is to develop a free, open-source software toolbox that anyone can use. The toolbox will visualize how the visual system processes complex scenes. It will determine which regions in a scene should be ignored and which regions the eyes should focus on. One strength of the proposal is that it makes specific predictions that can be tested in various fields of neuroscience. It might also lead to improvements in visual aids for visually impaired individuals because it can guide users toward areas in a scene that are likely to contain the target object.

The starting point for the proposed work is a mathematically explicit model of goal-directed visual processing. The model incorporates two components of visual complexity: a parameter that measures the visual difference between objects in the scene and the object the observer is looking for (the target) and a parameter that measures how similar objects in the scene are to one another. The preliminary work indicated that the model is very capable of predicting how long it will take observers to find targets in visually complex scenes. The first two goals of the present research aim at evaluating other components of visual complexity to improve the model and its ability to predict visual processing in more complex visual scenes. The experiments in Goals 1 and 2 will help determine how to combine the visual qualities of objects (such as color, shape and texture) as well as how to account for the contrast between objects and their background. Results from Goals 1 and 2 will directly guide the development of a computational toolbox. The toolbox will allow users to visualize visual processing of simple and complex scenes and make predictions about where observers are likely to move their eyes as a function of their current goals (freely inspect the scene or find a specific object within it). The proposed work combines behavioral psychophysics and computational simulations (Goals 1 and 2), toolbox implementation and eye-tracking validation (Goal 3). The merits of the toolbox include the fact that: 1) it combines different types of visual processing (visual conspicuity contrast and target template contrast), 2) it can predict eye movements over different time scales, and 3) it can evaluate the contribution of these two types of processing to performance. This implementation is important because the contribution of these two processes is known to vary as a function of search goals (free-view vs. goal-directed) and search strategy adopted by observers (active search vs. passive search).
Finally, another innovation of the toolbox is that it will be able to make predictions when targets are only defined in abstract terms, that is, when observers only have vague descriptions about the item they are supposed to find in the scene, which is particularly challenging for current computer vision systems to achieve.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
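To make the model description above concrete, here is a minimal sketch of the kind of search-time prediction such a toolbox could produce. It assumes a logarithmic-efficiency form in which each distractor group adds a set-size cost whose slope shrinks as that group's contrast with the target template grows; the function name predict_search_rt, its parameters, and the 1/contrast mapping are illustrative assumptions, not the actual TCAS implementation.

```python
import math

def predict_search_rt(baseline_rt_ms, distractor_groups):
    """Predict search reaction time (ms) for a goal-directed search.

    distractor_groups: list of (count, target_contrast) pairs, where
    target_contrast > 0 quantifies how different that distractor type
    is from the target template (larger contrast -> faster rejection).
    """
    rt = baseline_rt_ms
    for count, target_contrast in distractor_groups:
        # Each group adds a logarithmic set-size cost whose slope
        # shrinks as its contrast with the target grows.
        slope_ms = 1.0 / target_contrast   # illustrative mapping only
        rt += slope_ms * math.log(count + 1)
    return rt

# Example: a scene with 10 target-similar (low-contrast) distractors
# and 20 target-dissimilar (high-contrast) distractors.
print(predict_search_rt(450.0, [(10, 0.02), (20, 0.20)]))
```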

Project Outcomes

Journal Articles (7)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
Prioritization in visual attention does not work the way you think it does.
Predicting how color and shape combine in the human visual system to direct attention
  • DOI:
    10.1038/s41598-019-56238-9
  • Publication Date:
    2019-12-30
  • Journal:
  • Impact Factor:
    4.6
  • Authors:
    Buetti, Simona; Xu, Jing; Lleras, Alejandro
  • Corresponding Author:
    Lleras, Alejandro
Incorporating the properties of peripheral vision into theories of visual search
  • DOI:
    10.1038/s44159-022-00097-1
  • Publication Date:
    2022
  • Journal:
  • Impact Factor:
    0
  • Authors:
    Lleras, Alejandro; Buetti, Simona; Xu, Zoe Jing
  • Corresponding Author:
    Xu, Zoe Jing
Distractor–distractor interactions in visual search for oriented targets explain the increased difficulty observed in nonlinearly separable conditions.
Complex background information slows down parallel search efficiency by reducing the strength of interitem interactions.

Other Publications by Simona Buetti

Correction to: A target contrast signal theory of parallel processing in goal-directed search
  • DOI:
    10.3758/s13414-020-02000-7
  • Publication Date:
    2020-02-21
  • Journal:
  • Impact Factor:
    1.700
  • Authors:
    Alejandro Lleras; Zhiyuan Wang; Gavin Jun Peng Ng; Kirk Ballew; Jing Xu; Simona Buetti
  • Corresponding Author:
    Simona Buetti
Color-color feature guidance in visual search
  • DOI:
    10.3758/s13414-025-03055-0
  • Publication Date:
    2025-04-28
  • Journal:
  • Impact Factor:
    1.700
  • Authors:
    Yiwen Wang; Simona Buetti; Andrea Yaoyun Cui; Alejandro Lleras
  • Corresponding Author:
    Alejandro Lleras

Similar Overseas Grants

Template synthesis and characterization of single-walled inorganic nanotubes
  • Award Number:
    23H01807
  • Fiscal Year:
    2023
  • Funding Amount:
    $656,900
  • Project Type:
    Grant-in-Aid for Scientific Research (B)
CAS: Template Directed Synthesis of Earth Abundant Metal Oxide and Chalcogenide Nanoshells
  • Award Number:
    2304999
  • Fiscal Year:
    2023
  • Funding Amount:
    $656,900
  • Project Type:
    Standard Grant
Functionalization of the helical template peptides for development of medium sized peptides drugs.
  • Award Number:
    23K06043
  • Fiscal Year:
    2023
  • Funding Amount:
    $656,900
  • Project Type:
    Grant-in-Aid for Scientific Research (C)
Downregulation of neutrophil extracellular traps by fibrous regeneration template design.
  • Award Number:
    10654151
  • Fiscal Year:
    2023
  • Funding Amount:
    $656,900
  • Project Type:
Synthesis of porous solids via a complex molecular template approach
  • Award Number:
    575733-2022
  • Fiscal Year:
    2022
  • Funding Amount:
    $656,900
  • Project Type:
    Alexander Graham Bell Canada Graduate Scholarships - Master's
ERI: In-Situ Fabrication of Dual-Template Imprinted Nanocomposites for Simultaneous Detection of Glucose and Cortisol
  • Award Number:
    2138523
  • Fiscal Year:
    2022
  • Funding Amount:
    $656,900
  • Project Type:
    Standard Grant
Preparation for Transforming Oral Health Prevention throughout Canada: two historical studies, a survey of elected officials and a template for action
  • Award Number:
    468878
  • Fiscal Year:
    2022
  • Funding Amount:
    $656,900
  • Project Type:
    Operating Grants
Modularly built, complete, coordinate- and template-free brain atlases
  • Award Number:
    10570256
  • Fiscal Year:
    2022
  • Funding Amount:
    $656,900
  • Project Type:
Modularly built, complete, coordinate- and template-free brain atlases
  • Award Number:
    10467697
  • Fiscal Year:
    2022
  • Funding Amount:
    $656,900
  • Project Type:
Fine Synthesis of Metal Nanoparticle Catalysts Using Supramolecule as an Template
  • Award Number:
    22K19020
  • Fiscal Year:
    2022
  • Funding Amount:
    $656,900
  • Project Type:
    Grant-in-Aid for Challenging Research (Exploratory)