Collaborative Research: Modeling Perception and Memory: Studies in Priming


Basic Information

  • Award Number:
    0840998
  • Principal Investigator:
  • Amount:
    $269,600
  • Host Institution:
  • Host Institution Country:
    United States
  • Project Type:
    Standard Grant
  • Fiscal Year:
    2009
  • Funding Country:
    United States
  • Project Period:
    2009-09-01 to 2011-08-31
  • Project Status:
    Completed

Project Abstract

Collaborative Research: Modeling Perception and Memory: Studies in Priming
David E. Huber, University of California-San Diego
Richard M. Shiffrin, Indiana University

This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5).

It is said that "seeing is believing", and we take it for granted that vision operates efficiently and accurately. This suggests that vision is easy. However, failed attempts at producing computer vision demonstrate exactly the opposite--vision is perhaps the most difficult operation performed by the brain, requiring one third of the neocortex. The NSF-funded research project being conducted by David Huber at the University of California, San Diego and Richard Shiffrin at Indiana University focuses on an important question in visual perception: How is it that we can keep separate what we are currently viewing from that which came immediately before? In truth, vision is constantly "blurring" together information over time, such as when viewing the smooth motion at the cinema that is produced by a sequence of still images shown in rapid succession. However, while reading, our eyes constantly move from one word to the next, and yet unlike a movie, we see each word separately and do not confuse it with the previous words. To accomplish this, the brain must have a trick for deciding when the previous image should be combined with the next image and when each should be kept separate. Huber and Shiffrin hypothesize that the process of identifying each word or each movie image causes it to be suppressed so as to reduce inappropriate blending with the next word or image. In the case of a movie, the images appear too briefly, and the blending produces apparent movement. In the case of reading, our eyes dwell on each word exactly the right amount of time to fully identify and suppress each word so as to reduce confusion with the next word. Huber and Shiffrin investigate this ability to separate visual images in a variety of tasks, including reading, face identification, and rapid detection of change, to name just a few examples. If their hypothesis is correct, manipulating the timing of stimuli should produce analogous behavioral effects in all of these situations.

Beyond laboratory studies, this hypothesis may also improve computer vision systems in situations requiring rapid identification. For instance, computer controlled cameras at the airport might be used to identify faces of suspects, but this requires separating one face from another when there is a crowd of faces moving quickly past the camera. The results of this research may also be relevant to disorders such as autism, schizophrenia, and dyslexia, which often involve a component of distorted or abnormal perception. For instance, one account of dyslexia suggests that reading difficulties arise from an inappropriate blending of letters and words. Understanding the manner in which the brain separates visual information over time may help with the diagnosis, interpretation, and treatment of these perceptual deficits.

The human perceptual system receives a constant stream of continually changing information. For example, the eyes move several times each second, providing different views of different objects or words. This project investigates the dynamic process of separating in time and space information pertaining to previous sources (e.g., a previously viewed word) from information pertaining to the current source (e.g., the currently viewed word).
Behavioral studies will address the process of discounting that serves to reduce perceptual separation errors due to source confusion. This discounting process can be understood at multiple levels of description, and the proposed experiments test complementary and related mathematical models at the causal and neural levels of analysis. Two causal models use Bayesian statistical techniques and focus on optimizing perception in a noisy world perceived with a limited-capacity processing system; discounting is implemented as "explaining away" between competing sources. The neural model implements discounting through habituation that arises with the transient depletion of synaptic resources. In combination, these models demonstrate why perceptual discounting exists and the particular manner in which it is implemented. A wide variety of experimental paradigms involve the rapid presentation of visual objects, and the proposed studies use these models to investigate whether perceptual source confusion and discounting may provide a unified account of these phenomena. Besides visual short-term priming with words, the proposed studies examine the popular perceptual and cognitive paradigms of repetition blindness, flanker effects, the attentional blink, negative priming, semantic satiation, and affective priming. All of these paradigms involve presenting a picture, word, or symbol on a computer screen followed by a second presentation that is either identical, positively related, or negatively related to the first presentation. An important goal of this endeavor is to provide a unified account of these perceptual phenomena, which are currently considered in isolation by researchers.
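To make the neural-level mechanism concrete, here is a minimal sketch of habituation through transient depletion of synaptic resources: a unit's output is its input scaled by a resource pool that is depleted by transmission and recovers slowly, so a just-identified prime evokes a weaker response when it immediately repeats. This is an illustrative toy model written for this summary, not the investigators' implementation; the function name, parameter values, and time constants are hypothetical.

```python
import numpy as np

def habituating_response(drive, dt=0.001, tau_recover=0.5, depletion_gain=20.0):
    """Toy habituation via transient depletion of synaptic resources.

    output(t) = drive(t) * resources(t); transmission depletes the resource
    pool, which then recovers slowly toward 1. All parameters are hypothetical.
    """
    resources = 1.0
    output = np.zeros(len(drive))
    for t, x in enumerate(drive):
        output[t] = x * resources
        # depletion proportional to what was just transmitted; slow recovery toward full
        resources += dt * ((1.0 - resources) / tau_recover - depletion_gain * output[t])
        resources = min(max(resources, 0.0), 1.0)
    return output

# A word shown as a prime (0-300 ms), a 100 ms blank, then the same word again (400-700 ms).
time = np.arange(0.0, 0.7, 0.001)
drive = ((time < 0.3) | (time >= 0.4)).astype(float)
response = habituating_response(drive)
print(f"peak response to the prime:      {response[:300].max():.2f}")  # ~1.00
print(f"peak response to the repetition: {response[400:].max():.2f}")  # smaller: habituated
```

Read this way, the depleted response to the repetition plays the role of the discounting described above: evidence already attributed to the prime contributes less to the response evoked by the subsequent target.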

Project Outcomes

Journal Articles (0)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)


Other Grants by Richard Shiffrin

Student Travel Awards to the Sackler Colloquium: Brain Produces Mind by Modeling, May 1-3, 2019, Irvine, CA
  • Award Number:
    1913737
  • Fiscal Year:
    2019
  • Funding Amount:
    $269,600
  • Project Type:
    Standard Grant
Conference: Drawing Causal Inference from Big Data
  • Award Number:
    1430441
  • Fiscal Year:
    2015
  • Funding Amount:
    $269,600
  • Project Type:
    Standard Grant
An Undergraduate Curriculum for Cognitive and Information Sciences
  • Award Number:
    9752299
  • Fiscal Year:
    1998
  • Funding Amount:
    $269,600
  • Project Type:
    Standard Grant
Processing Visual Information from Unattended Locations
  • Award Number:
    9512089
  • Fiscal Year:
    1995
  • Funding Amount:
    $269,600
  • Project Type:
    Continuing Grant
Controlled and Automatic Information Processing
  • Award Number:
    7700156
  • Fiscal Year:
    1977
  • Funding Amount:
    $269,600
  • Project Type:
    Standard Grant

Similar NSFC Grants

Research on Quantum Field Theory without a Lagrangian Description
  • Award Number:
    24ZR1403900
  • Year Approved:
    2024
  • Funding Amount:
    CNY 0
  • Project Type:
    Provincial/Municipal Project
Cell Research
  • Award Number:
    31224802
  • Year Approved:
    2012
  • Funding Amount:
    CNY 240,000
  • Project Type:
    Special Fund Project
Cell Research
  • Award Number:
    31024804
  • Year Approved:
    2010
  • Funding Amount:
    CNY 240,000
  • Project Type:
    Special Fund Project
Cell Research
  • Award Number:
    30824808
  • Year Approved:
    2008
  • Funding Amount:
    CNY 240,000
  • Project Type:
    Special Fund Project
Research on the Rapid Growth Mechanism of KDP Crystal
  • Award Number:
    10774081
  • Year Approved:
    2007
  • Funding Amount:
    CNY 450,000
  • Project Type:
    General Program

Similar Overseas Grants

Collaborative Research: Ionospheric Density Response to American Solar Eclipses Using Coordinated Radio Observations with Modeling Support
  • Award Number:
    2412294
  • Fiscal Year:
    2024
  • Funding Amount:
    $269,600
  • Project Type:
    Standard Grant
Collaborative Research: CDS&E: data-enabled dynamic microstructural modeling of flowing complex fluids
  • Award Number:
    2347345
  • Fiscal Year:
    2024
  • Funding Amount:
    $269,600
  • Project Type:
    Standard Grant
Collaborative Research: Using Polarimetric Radar Observations, Cloud Modeling, and In Situ Aircraft Measurements for Large Hail Detection and Warning of Impending Hail
  • Award Number:
    2344259
  • Fiscal Year:
    2024
  • Funding Amount:
    $269,600
  • Project Type:
    Standard Grant
Collaborative Research: Enabling Cloud-Permitting and Coupled Climate Modeling via Nonhydrostatic Extensions of the CESM Spectral Element Dynamical Core
  • Award Number:
    2332469
  • Fiscal Year:
    2024
  • Funding Amount:
    $269,600
  • Project Type:
    Continuing Grant
NSF-BSF: Collaborative Research: Solids and reactive transport processes in sewer systems of the future: modeling and experimental investigation
  • Award Number:
    2134594
  • Fiscal Year:
    2024
  • Funding Amount:
    $269,600
  • Project Type:
    Standard Grant
Collaborative Research: Connecting the Past, Present, and Future Climate of the Lake Victoria Basin using High-Resolution Coupled Modeling
  • Award Number:
    2323649
  • Fiscal Year:
    2024
  • Funding Amount:
    $269,600
  • Project Type:
    Standard Grant
Collaborative Research: SaTC: CORE: Medium: Differentially Private SQL with flexible privacy modeling, machine-checked system design, and accuracy optimization
  • Award Number:
    2317232
  • Fiscal Year:
    2024
  • Funding Amount:
    $269,600
  • Project Type:
    Continuing Grant
Collaborative Research: NSFGEO-NERC: Advancing capabilities to model ultra-low velocity zone properties through full waveform Bayesian inversion and geodynamic modeling
  • Award Number:
    2341238
  • Fiscal Year:
    2024
  • Funding Amount:
    $269,600
  • Project Type:
    Standard Grant
Collaborative Research: Using Polarimetric Radar Observations, Cloud Modeling, and In Situ Aircraft Measurements for Large Hail Detection and Warning of Impending Hail
  • Award Number:
    2344260
  • Fiscal Year:
    2024
  • Funding Amount:
    $269,600
  • Project Type:
    Standard Grant
Collaborative Research: CDS&E: data-enabled dynamic microstructural modeling of flowing complex fluids
  • Award Number:
    2347344
  • Fiscal Year:
    2024
  • Funding Amount:
    $269,600
  • Project Type:
    Standard Grant