Hierarchical cortical circuits implementing robust 3D visual perception
Basic Information
- Award Number: 10468723
- Principal Investigator: Ari Rosenberg
- Amount: $407,700
- Host Institution:
- Host Institution Country: United States
- Project Category:
- Fiscal Year: 2018
- Funding Country: United States
- Project Period: 2018-09-01 to 2024-08-31
- Project Status: Completed
- Source:
- Keywords: 3-Dimensional; 3D world; Animals; Area; Behavioral; Brain; Brain region; Clutterings; Code; Complex; Cues; Data; Discrimination; Disease; Electrophysiology (science); Environment; Etiology; Eye; Face; Feedback; Frequencies; Future; Goals; Human; Image; Impaired cognition; Industry; Joints; Knowledge; Macaca; Magnetic Resonance Imaging; Measures; Monkeys; Motor output; Neurons; Pathway interactions; Perception; Positioning Attribute; Process; Reliability of Results; Research; Retina; Robot; Sensory; Signal Transduction; Stimulus; Structure; Testing; Uncertainty; Variant; Vision; Vision Disparity; Visual; Visual Perception; Visual system structure; Weight; Work; base; experimental study; falls; imaging approach; improved; insight; monocular; movie; neural circuit; neural correlate; neuroimaging; neurophysiology; orientation selectivity; public health relevance; receptive field; relating to nervous system; response; retinal imaging; retinotopic; sample fixation; stereoscopic; theories; three dimensional structure; two-dimensional; virtual reality; visual information
PROJECT SUMMARY/ABSTRACT
How do we perceive the three-dimensional (3D) structure of the world when our eyes only sense two-dimensional
(2D) projections like a movie on a screen? Reconstructing 3D scene information from 2D retinal images is a
highly complex problem, made evident by the great difficulty robots have in turning visual inputs into appropriate
3D motor outputs to move physical chessmen on a cluttered board, even though they can beat the best human
chess players. The goal of this proposal is to elucidate how hierarchical cortical circuits implement robust (i.e.,
accurate & precise) 3D visual perception. Towards this end, we will answer two fundamental questions about
how the brain achieves the 2D-to-3D visual transformation using behavioral, electrophysiological, and neuro-
imaging approaches. In Aim 1, we will answer the question of how the visual system represents the spatial pose
(i.e., position & orientation) of objects in 3D space. Our hypothesis is that 3D scene information is reconstructed
within the V1→V3A→CIP pathway. We will test this hypothesis by simultaneously recording 3D pose tuning
curves from V3A and CIP neurons in macaque monkeys while the animals perform an eight-alternative 3D
orientation discrimination task. This experiment will dissociate neural responses to 3D pose that reflect
elementary receptive field structures (resulting in 3D orientation preferences that vary with position-in-depth,
which we anticipate to find in V3A) from those that represent 3D object features (resulting in 3D orientation
preferences that are invariant to position-in-depth, which we anticipate to find in CIP). Using these data, we will
additionally test for functional correlates between neural activity in each area and perceptual sensitivity. Through
application of Granger Causality Analysis to simultaneous local field potential recordings in V3A and CIP, we will
further test for feedforward/feedback influences between the areas to evaluate their hierarchical structure. In
Aim 2, we will answer the question of how binocular disparity cues (differences in where an object's image falls
on each retina) and perspective cues (features resulting from 2D retinal projections of the 3D world) are
integrated at the perceptual and neuronal levels to achieve robust 3D visual representations. Both cues provide
valuable 3D scene information, and human perceptual studies show that their integration is dynamically
reweighted depending on the viewing conditions (i.e., position-in-depth & orientation-in-depth) to achieve robust
3D percepts. Specifically, greater weight is assigned to the more reliable cue based on the viewing conditions;
but where and how this sophisticated integrative process is implemented in the brain is unknown. We anticipate
that V3A and CIP will each show sensitivity to both cue types, but only CIP will dynamically reweight the cues to
achieve robust 3D representations. This research is important for understanding ecologically relevant sensory
processing and neural computations that are required for us to successfully interact with our 3D environment.
Insights from this work will also extend beyond 3D vision by elucidating processes implemented by neural circuits
to solve highly nonlinear optimization problems that turn ambiguous sensory signals into robust perceptions.
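
Aim 1 hinges on dissociating 3D orientation preferences that shift with position-in-depth (elementary receptive-field structure, anticipated in V3A) from preferences that are depth-invariant (3D object features, anticipated in CIP). The sketch below illustrates that dissociation logic on synthetic tuning curves; the neuron models, depth values, and invariance index are illustrative assumptions, not the proposal's actual analysis.

```python
# Minimal sketch (synthetic data): is a neuron's 3D orientation tuning
# invariant to position-in-depth (CIP-like) or does it shift (V3A-like)?
import numpy as np

rng = np.random.default_rng(0)

orientations = np.linspace(-90, 90, 8, endpoint=False)  # 8 orientations (deg), as in the 8-alternative task
depths = np.array([-20.0, 0.0, 20.0])                   # hypothetical positions-in-depth (cm)

def tuning_curve(pref_deg, amplitude=20.0, width=30.0):
    """Toy Gaussian tuning over the tested orientations (spikes/s)."""
    return amplitude * np.exp(-0.5 * ((orientations - pref_deg) / width) ** 2)

# A "CIP-like" neuron keeps the same preferred orientation at every depth;
# a "V3A-like" neuron's preference drifts with depth (receptive-field-based response).
cip_like = np.stack([tuning_curve(30.0) + rng.normal(0, 1, 8) for _ in depths])
v3a_like = np.stack([tuning_curve(30.0 + 1.25 * d) + rng.normal(0, 1, 8) for d in depths])

def depth_invariance(curves):
    """Mean pairwise correlation of tuning curves across depths (1 = fully invariant)."""
    pairs = [(i, j) for i in range(len(curves)) for j in range(i + 1, len(curves))]
    return np.mean([np.corrcoef(curves[i], curves[j])[0, 1] for i, j in pairs])

print(f"CIP-like invariance index: {depth_invariance(cip_like):.2f}")   # expected near 1
print(f"V3A-like invariance index: {depth_invariance(v3a_like):.2f}")   # expected well below 1
```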
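The summary also proposes Granger causality analysis of simultaneously recorded V3A and CIP local field potentials to assess feedforward/feedback influences. A minimal sketch on synthetic signals, using grangercausalitytests from statsmodels, follows; the coupling structure, lag range, and test choice (ssr F-test) are assumptions for illustration, not the proposal's pipeline.

```python
# Minimal sketch (synthetic data): directional (Granger) influence between two
# simultaneously recorded LFP-like signals, standing in for V3A and CIP.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 2000
v3a = rng.normal(size=n)
cip = np.zeros(n)
for t in range(2, n):
    # CIP signal partly driven by past V3A activity (a feedforward influence).
    cip[t] = 0.5 * cip[t - 1] + 0.4 * v3a[t - 2] + rng.normal(scale=0.5)

def granger_p(target, source, maxlag=5):
    """Smallest ssr F-test p-value that `source` Granger-causes `target`.
    (grangercausalitytests also prints its own per-lag summary.)"""
    data = np.column_stack([target, source])  # column 0: target, column 1: candidate cause
    res = grangercausalitytests(data, maxlag=maxlag)
    return min(res[lag][0]['ssr_ftest'][1] for lag in range(1, maxlag + 1))

print(f"V3A -> CIP p = {granger_p(cip, v3a):.3g}")  # expect small (influence present)
print(f"CIP -> V3A p = {granger_p(v3a, cip):.3g}")  # expect larger (no influence simulated)
```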
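For Aim 2, the dynamic reweighting described in the summary is usually formalized as reliability-weighted cue combination: each cue's weight is proportional to its inverse variance, so the more reliable cue dominates and the combined estimate is never less reliable than either cue alone. The sketch below illustrates that normative rule with made-up variance values; it is not the proposal's neural model.

```python
# Minimal sketch: reliability-weighted combination of disparity and perspective
# estimates of 3D orientation. Variance values are illustrative assumptions.
import numpy as np

def combine_cues(est_disparity, var_disparity, est_perspective, var_perspective):
    """Reliability-weighted estimate of 3D orientation from two cues (degrees)."""
    r_d, r_p = 1.0 / var_disparity, 1.0 / var_perspective  # reliabilities = inverse variances
    w_d = r_d / (r_d + r_p)                                 # disparity weight
    w_p = 1.0 - w_d                                         # perspective weight
    combined = w_d * est_disparity + w_p * est_perspective
    combined_var = 1.0 / (r_d + r_p)                        # never worse than either cue alone
    return combined, combined_var, w_d

# Near the fixation plane, disparity tends to be the more reliable cue ...
print(combine_cues(est_disparity=32.0, var_disparity=4.0,
                   est_perspective=38.0, var_perspective=16.0))
# ... while at far positions-in-depth its reliability drops, shifting weight
# toward perspective (disparity variance raised here for illustration).
print(combine_cues(est_disparity=32.0, var_disparity=36.0,
                   est_perspective=38.0, var_perspective=16.0))
```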
Project Outcomes
- Journal articles: 0
- Monographs: 0
- Research awards: 0
- Conference papers: 0
- Patents: 0
Other grants held by Ari Rosenberg
Cortical processing of three-dimensional object-motion
- Award Number: 10638729
- Fiscal Year: 2023
- Funding Amount: $407,700
- Project Category:

Hierarchical cortical circuits implementing robust 3D visual perception
- Award Number: 9769032
- Fiscal Year: 2018
- Funding Amount: $407,700
- Project Category:

Hierarchical cortical circuits implementing robust 3D visual perception
- Award Number: 10237226
- Fiscal Year: 2018
- Funding Amount: $407,700
- Project Category:

Vestibular contribution to the encoding of object orientation relative to gravity
- Award Number: 9174035
- Fiscal Year: 2014
- Funding Amount: $407,700
- Project Category:
Similar overseas grants
Building joint models of language and the 3D world
- Award Number: RGPIN-2020-07196
- Fiscal Year: 2022
- Funding Amount: $407,700
- Project Category: Discovery Grants Program - Individual

CAREER: Learning to Perceive the Interactive 3D World from an Image
- Award Number: 2142529
- Fiscal Year: 2022
- Funding Amount: $407,700
- Project Category: Continuing Grant

Building joint models of language and the 3D world
- Award Number: RGPIN-2020-07196
- Fiscal Year: 2021
- Funding Amount: $407,700
- Project Category: Discovery Grants Program - Individual

Building joint models of language and the 3D world
- Award Number: DGECR-2020-00310
- Fiscal Year: 2020
- Funding Amount: $407,700
- Project Category: Discovery Launch Supplement

Building joint models of language and the 3D world
- Award Number: RGPIN-2020-07196
- Fiscal Year: 2020
- Funding Amount: $407,700
- Project Category: Discovery Grants Program - Individual

Updating head direction in a 3D world
- Award Number: 2091869
- Fiscal Year: 2018
- Funding Amount: $407,700
- Project Category: Studentship

Intelligent 3D world building from mobile terrestrial LiDAR point clouds
- Award Number: 311923-2013
- Fiscal Year: 2017
- Funding Amount: $407,700
- Project Category: Discovery Grants Program - Individual

Intelligent 3D world building from mobile terrestrial LiDAR point clouds
- Award Number: 311923-2013
- Fiscal Year: 2016
- Funding Amount: $407,700
- Project Category: Discovery Grants Program - Individual

Intelligent 3D world building from mobile terrestrial LiDAR point clouds
- Award Number: 311923-2013
- Fiscal Year: 2015
- Funding Amount: $407,700
- Project Category: Discovery Grants Program - Individual

Intelligent 3D world building from mobile terrestrial LiDAR point clouds
- Award Number: 311923-2013
- Fiscal Year: 2014
- Funding Amount: $407,700
- Project Category: Discovery Grants Program - Individual